Selection and Configuration of Sorption Isotherm Models in Soils Using Artificial Bees Guided by the Particle Swarm

Tadikonda Venkata Bharat

Research Article | Open Access
Advances in Artificial Intelligence, vol. 2017, Article ID 3497652, 22 pages, 2017. https://doi.org/10.1155/2017/3497652

Academic Editor: David Glass
Received: 23 Mar 2016
Revised: 10 Oct 2016
Accepted: 22 Nov 2016
Published: 18 Jan 2017

Abstract

A precise estimation of isotherm model parameters and the selection of an appropriate isotherm from measured data are essential for predicting the fate and transport of toxic contaminants in the environment. Nonlinear least-square techniques are widely used for fitting isotherm models to experimental data. However, such conventional techniques pose several limitations in parameter estimation and in the choice of an appropriate isotherm model, as shown in this paper. It is demonstrated in the present work that the classical deterministic techniques are sensitive to the initial guess and thus their performance is impeded by the presence of local optima. A novel solver based on a modified artificial bee-colony (MABC) algorithm is proposed in this work for the selection and configuration of appropriate sorption isotherms. The performance of the proposed solver is compared with three other solvers based on swarm intelligence for model parameter estimation using measured data from 21 soils. The performance comparison on the measured data reveals that the proposed solver has excellent convergence capabilities due to its superior exploration-exploitation abilities. The best solutions estimated by the proposed solver are almost identical to the mean fitness values obtained over 20 independent runs. The advantages of the proposed solver are presented.

1. Introduction

The fate and transport of heavy metals in soils are a serious concern due to their potential impact on the environment. These reactive substances undergo complex interactions in soils that depend on the soil properties, the type of clay minerals, and the pore-fluid characteristics. Identifying the fate and transport of these substances in soils requires an understanding of the various sorption mechanisms of species in the soil environment [1]. The sorption reactions in soils are, therefore, important in contaminant transport modeling in the subsurface and in containment facilities such as landfills. The sorption of different solutes in soils is generally described using isotherms. Sorption isotherms relate the concentration of the sorbed solute on the sorbent to the solute concentration in solution at equilibrium. Several linear and nonlinear isotherms are commonly used in various fields of research to understand the sorption mechanism between the sorbent (soil solids) and the solute [2–4]. Precise selection and configuration of a sorption isotherm for a given sorbent-solute combination are essential in contaminant risk assessment. The selection and configuration depend on the chosen isotherm model, the experimental representation of the environmental conditions, and the parameter estimation method [5]. The sorption model parameters and the selection of an appropriate isotherm model are obtained by fitting isotherm models to the experimental data using regression [6]. Nonlinear least-square techniques are widely used for this purpose [4, 5, 7, 8]. However, it will be shown in this work that classical gradient-based techniques suffer from several limitations.

Nature-inspired algorithms, such as those based on swarm intelligence (SI), are global search techniques widely used for many optimization problems in engineering. Particle swarm optimization (PSO) [9] and artificial bee colony (ABC) [10] are commonly used SI techniques. These algorithms overcome several limitations of conventional, deterministic techniques and are therefore widely applied in geotechnical engineering for several optimization problems [11–17]. However, these techniques often converge prematurely to a local optimum and may not generate the same solution in different runs due to the stochastic nature of the algorithms [18]. A hybrid method based on particle swarm optimization and a gradient-based algorithm is often used for configuration of sorption isotherms [19]. However, the limitations associated with the slow convergence rate in the early stage of the search and the lack of diversity in the later stage of the search may remain in the basic particle swarm optimization [20, 21]. Therefore, hybrid methods may not offset the limitations inherited from each component. Such hybrid methods are also computationally expensive, as they require both gradient information and a larger number of objective function evaluations.

It will be shown that the performance of conventional optimization techniques and the PSO algorithm is greatly hampered by the presence of several local optima on the functional terrain. An inverse model based on the artificial bee-colony (ABC) optimization method is proposed in this work for the selection and configuration of appropriate sorption isotherms from experimental sorption data. The proposed solver is further improved by introducing several modifications to the ABC algorithm. The proposed technique is highly robust; it accurately estimates the sorption model parameters and identifies the appropriate isotherm for the experimental data.

2. Theory

Sorption of different chemical constituents in soils is routinely measured in the laboratory under controlled temperature, pressure, and pH. The measured sorption data is generally described by either an equilibrium or a kinetic retention process [2]. The present study is limited to modeling of the equilibrium retention process only. The measured, equilibrium sorption data is represented as the sorbed concentration, S, against the equilibrium concentration in the aqueous phase, C. Sorption isotherms describe the mathematical dependency of the sorbed concentration on the equilibrium concentration. The determination of the sorption model parameters from this sorption data is essential for the design of containment facilities. Further, configuration of the observed data using available models is important in the modeling of contaminant transport and sorption processes for containment applications. Sorption isotherms are classified into S-curve, L-curve, H-curve, and C-curve types based on the initial slope of the curve [22–24]. A thorough description of various sorption isotherm models is given elsewhere [24–27]. Some of the commonly used isotherms in geoenvironmental engineering applications are given below.

2.1. Freundlich Isotherm

The Freundlich isotherm is widely used for describing nonideal and reversible adsorption [27, 28]. This isotherm is applied in heterogeneous systems such as soils. The Freundlich isotherm can be expressed as

$$S = K_d C^{N}, \quad (1)$$

where S is the amount of solute retained by the soil (μg/g), C is the solute concentration in solution (μg/mL), K_d is the distribution coefficient (mL/g), and the parameter N is a dimensionless constant. The distribution coefficient describes the partitioning of solute species between the solid and liquid phases [2]. The linear sorption equation is a particular case of the Freundlich expression and can be derived by substituting N = 1.

2.2. Langmuir Isotherm

The Langmuir model is another popularly used sorption model, originally developed by Langmuir [29] to describe the adsorption of gases by solids. This isotherm model is widely used in geoenvironmental applications. For example, the model has recently been applied for understanding phosphorus contamination in soils [30], ammonium removal using zeolites [31], and phenol adsorption in natural soils [32]. The advantage of this model is that a maximum sorptive capacity is used in the model, which is regarded as a measure of the available sorption sites on the solid phase. The Langmuir isotherm is defined as

$$S = \frac{S_{max} K_L C}{1 + K_L C}, \quad (2)$$

where K_L and S_max are the fitting parameters. The parameter K_L (mL/g) is a measure of the bond strength at which the sorbed solutes are held on the soil surface, and S_max (μg/g) is the maximum sorption capacity. This isotherm is extensively used to describe the sorption of solutes by soils [5]. Often various modifications to (1) and (2) are used [23], as some sorption data do not follow the standard isotherms. These modified forms are presented in the following subsections.

2.3. Freundlich-Langmuir Model

The Freundlich-Langmuir (FL) model is a power function based on the assumption of continuously distributed affinity coefficients [4, 23]. This model can be described as

$$S = \frac{S_{max} K C^{N}}{1 + K C^{N}}, \quad (3)$$

where N is the fitting parameter (0 < N ≤ 1). The Langmuir isotherm is a special case of this model and can be derived by substituting N = 1. The FL model is widely used for simulating the sorption behaviour of heavy metals on clay minerals [33, 34].

2.4. Two-Site Langmuir Model

The two-site Langmuir model is based on the assumption that sorption occurs on two types of surface sites, each with a different bonding energy. One site has a high bonding energy and reacts rapidly, while the other has a lower bonding energy and reacts more slowly [35]. The modified Langmuir model with two sorbing components is given by

$$S = \frac{S_{max,1} K_1 C}{1 + K_1 C} + \frac{S_{max,2} K_2 C}{1 + K_2 C}, \quad (4)$$

where S_max,1 and S_max,2 are the maximum quantities of solute that can be sorbed by the two sites, and K_1 and K_2 are constants related to the bonding energies of the components. Several researchers obtained excellent fits of phosphate sorption data with the two-site Langmuir model [36], as well as of metal sorption data, for example, lanthanide sorption on smectite clays [37, 38]. Several other isotherm models, such as Farley–Dzombak–Morel [39] and Tóth [40], are often used, but their modeling is out of the scope of this work.
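For concreteness, (1)-(4) can be coded directly as model functions. The sketch below is a minimal Python version; the function and parameter names simply mirror the symbols used above and are not taken from the original implementation.

import numpy as np

def freundlich(C, Kd, N):
    # Freundlich isotherm, Eq. (1): S = Kd * C**N
    return Kd * np.asarray(C, dtype=float) ** N

def langmuir(C, Smax, KL):
    # Langmuir isotherm, Eq. (2)
    C = np.asarray(C, dtype=float)
    return Smax * KL * C / (1.0 + KL * C)

def freundlich_langmuir(C, Smax, K, N):
    # Freundlich-Langmuir isotherm, Eq. (3); reduces to Langmuir when N = 1
    C = np.asarray(C, dtype=float)
    return Smax * K * C ** N / (1.0 + K * C ** N)

def two_site_langmuir(C, Smax1, K1, Smax2, K2):
    # Two-site Langmuir isotherm, Eq. (4)
    C = np.asarray(C, dtype=float)
    return (Smax1 * K1 * C / (1.0 + K1 * C)
            + Smax2 * K2 * C / (1.0 + K2 * C))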

3. Estimation of Model Parameters

Accurate estimation of the model parameters of (1) through (4) is required for the prediction of the fate and transport of contaminants in the environment. These model parameters are frequently estimated by nonlinear, least-square approaches by minimizing an objective function. The objective function is an error measure which is used to determine the accuracy of a given model in describing the measured data. Several error measures, such as the coefficient of determination (R^2), sum of squared deviations (SSD), mean squared error (MSE), and root mean square error (RMSE), are commonly used in the inverse analysis [27]. The RMSE is often used to verify the appropriateness of the model to the observed data and is given by

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(S_i^{obs} - S_i^{fit}\right)^{2}}, \quad (5)$$

where S_i^obs and S_i^fit are the observed and fitted sorption values, respectively, and n is the number of observed data points.
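A minimal implementation of the RMSE objective in (5) is sketched below, reusing the isotherm functions given earlier; the argument order is an assumption made for illustration.

import numpy as np

def rmse(params, model, C_obs, S_obs):
    # Root mean square error, Eq. (5), between observed and fitted sorption
    # example: rmse((Kd, N), freundlich, C_obs, S_obs)
    S_fit = model(np.asarray(C_obs, dtype=float), *params)
    return np.sqrt(np.mean((np.asarray(S_obs, dtype=float) - S_fit) ** 2))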

As the aforementioned theoretical isotherms are nonlinear, optimization techniques are used to estimate the model parameters of a given isotherm and to determine the appropriate isotherm for given data. Nonlinear regression techniques such as the Levenberg–Marquardt method are widely used in the literature for parameterization [34, 41, 42]. A brief description of the gradient-based technique is given here.

3.1. Gradient-Based Approach

Gradient-based techniques are the conventional optimization tools for parameter estimation in many engineering applications. The principle behind such techniques is to find an optimum point in the search space where the derivatives of the objective function with respect to the model parameters become zero. The unknown model parameter vector, b, is determined iteratively in these methods. The direction of descent for the gradient-based method is based on

$$\mathbf{b}^{k+1} = \mathbf{b}^{k} + \lambda^{k} \mathbf{d}^{k}, \qquad \mathbf{d}^{k} = -\nabla f\left(\mathbf{b}^{k}\right), \quad (6)$$

where λ^k is a scalar which determines the step length in the direction of d^k. Several variants of gradient-based algorithms are used that may require the information of the second derivative of the objective function (Hessian matrix) for a faster convergence rate.
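The sketch below illustrates the iteration in (6) with a central-difference gradient and a fixed step length; it is only a schematic stand-in for the more elaborate line-search and Hessian-based strategies (such as Levenberg-Marquardt) cited above, and the step length and iteration count are arbitrary placeholders.

import numpy as np

def numerical_gradient(f, b, eps=1e-6):
    # Central-difference approximation of the gradient of f at b
    b = np.asarray(b, dtype=float)
    grad = np.zeros_like(b)
    for j in range(b.size):
        step = np.zeros_like(b)
        step[j] = eps
        grad[j] = (f(b + step) - f(b - step)) / (2.0 * eps)
    return grad

def steepest_descent(f, b0, step_length=1e-3, iterations=500):
    # Iterate b(k+1) = b(k) + lambda * d(k) with d(k) = -grad f(b(k)), Eq. (6)
    b = np.asarray(b0, dtype=float)
    for _ in range(iterations):
        b = b - step_length * numerical_gradient(f, b)
    return b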

It was found in this work that these techniques suffer from several limitations. The conventional algorithms are highly sensitive to the fitness function (error measure) and to the initial guess. Of late, nature-inspired or swarm intelligence search techniques have been used as an alternative to the conventional methods for solving several optimization problems in geoenvironmental engineering [11–13, 15–18].

3.2. Swarm Intelligence

Swarm intelligence (SI) is a field of research which develops computational techniques inspired by nature for solving optimization problems. Genetic algorithms (GA), particle swarm optimization (PSO), and artificial bee colony (ABC) are some of the techniques developed by mimicking nature for determining global optimum solutions of inverse problems. In this paper, several solvers based on SI techniques were developed for predicting the model parameters of the isotherm equations and for determining the appropriate isotherm for the measured data. The following SI techniques were used in developing the solvers.

3.2.1. Particle Swarm Optimization

PSO is a class of stochastic, derivative-free, population-based methods. The working principle of the PSO method is inspired by the social behaviour of animals such as the flocking of birds and the schooling of fishes [9]. In PSO, each particle (i.e., agent) of the population (i.e., swarm) is thought of as a collision-free bird in a D-dimensional search space, where D is the number of model parameters to be optimized. The PSO algorithm starts by initializing a predefined number of particles in the search space represented by the objective function. Each particle is updated with a position and a velocity at any given iteration. The position and velocity are updated based on the flying experience (movement) of the individual particle and its neighbors to achieve a better position in the search space. Further, the best location of each individual particle in its history (all the previous iterations) and the best location experienced by the whole swarm in any given iteration are memorized. Therefore, the movement of a particle is an aggregated acceleration towards its best previously visited location and towards the best individual of a topological neighborhood [43]. The velocity (v_id) and movement (x_id) of the ith particle in the dth dimension are updated using [44, 45]

$$v_{id}^{t+1} = v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right), \quad (7)$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \quad (8)$$

where p_id is the history-best position of the individual particle; p_gd is the global-best position; c_1 and c_2 are the acceleration constants that accelerate the movement of the particles towards the individual-best and global-best positions, respectively; and r_1 and r_2 are two independent random numbers, uniformly distributed between 0 and 1, used to stochastically vary the relative pull towards p_id and p_gd. The randomness introduced through these terms provides the unpredictable behaviour of the swarm, which is useful for preventing premature convergence. This iterative process continues, iteration by iteration, until the predefined number of iterations is reached. Individual particles are drawn towards the success of each other during the search process, which results in the formation of a cluster of particles in the global optimum region.

PSO is widely used for solving optimization problems in many engineering disciplines due to its simplicity. Several experimental studies on the application of the PSO algorithm to benchmark functions and optimization problems revealed that the algorithm explodes quickly as the first (velocity) term in (7) increases incessantly [45]. Notable works on improving the update equation are the introduction of the inertia factor to control the velocity of the particles [44] and the constriction factor which constrains the magnitude of the velocity [45]. The modified form of the velocity-update equation is [12, 44, 45]

$$v_{id}^{t+1} = \chi \left[\omega\, v_{id}^{t} + c_1 r_1 \left(p_{id} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd} - x_{id}^{t}\right)\right], \quad (9)$$

where χ is the constriction factor and ω is the inertia weight introduced to improve the performance of the PSO algorithm. The first term in (9) is the inertia term, which restricts the particles from changing the flight direction drastically. Considerable improvements to (9) are available in the literature for improving the performance of the PSO algorithm. The present work uses the perturbed PSO algorithm for developing the solver.
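A compact sketch of the PSO iteration defined by (7)-(9) is given below. It is a generic illustration, not the solver used in this study: the coefficient values shown (inertia weight, constriction factor, acceleration constants) are common textbook defaults, whereas the study tuned its own values on synthetic data.

import numpy as np

def pso(objective, bounds, n_particles=20, iterations=150,
        w=0.72, chi=1.0, c1=1.49, c2=1.49, seed=0):
    # Basic PSO with inertia weight and constriction factor, Eqs. (7)-(9).
    # `bounds` is a sequence of (lower, upper) pairs, one per model parameter.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = lo + rng.random((n_particles, dim)) * (hi - lo)   # initial positions
    v = np.zeros((n_particles, dim))                      # initial velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iterations):
        r1, r2 = rng.random((2, n_particles, dim))
        v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

An objective for, say, a Freundlich fit would then be lambda p: rmse(p, freundlich, C_obs, S_obs), together with suitable bounds on K_d and N.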

3.2.2. Variant of PSO Algorithm

The classical PSO algorithm and many of its variants suffer from a few limitations during the search process. The major drawback of these algorithms is that they converge to suboptimal solutions for several problems due to the application of the same movement update equation, (8), to every particle, including the global-best particle [18]. Bharat et al. [18] proposed the perturbed PSO (PPSO) algorithm, which instead updates the global-best particle with a perturbation equation, (10), in which the current global-best position is displaced by a bounded random amount scaled by a constant perturbation coefficient.

Equation (10) is used to randomly perturb the global-best particle in a suboptimal region to avoid premature convergence. The performance of the PPSO algorithm is further improved by introducing catfish particles [46] into the swarm [14]. The particles are arranged in descending order of their fitness values after every few generations (catIter). A known percentage of the low-fit solutions are chosen as catfish particles and are repositioned in the D-dimensional search space based on opposition-based learning (OBL). The OBL technique is based on the utilization of the opposition numbers of the current positions in the search space [47]; this technique is explained in the following subsections. The catfish particles improve the efficiency by exploring new feasible regions of the search space. This optimization algorithm is referred to as the PCPSO algorithm in this work. The PCPSO algorithm is believed to alleviate premature convergence of the swarm, even with a small population, due to the unique combination of the perturbation equation for the global-best particle and the exploration capabilities of the catfish particles [14]. Two other solvers based on SI algorithms are also developed in this work.
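As an illustration of the catfish step described above, the snippet below repositions the worst-fit fraction of the swarm at their opposition-based counterparts; it assumes the particle positions are stored row-wise in a NumPy array, as in the earlier PSO sketch, and would be invoked every catIter iterations inside that loop.

import numpy as np

def catfish_reposition(x, fitness, lo, hi, fraction=0.2):
    # Replace the worst `fraction` of particles by their OBL opposite points
    n_catfish = max(1, int(fraction * len(x)))
    worst = np.argsort(fitness)[-n_catfish:]   # largest errors = worst fit
    x[worst] = lo + hi - x[worst]              # opposition-based learning
    return x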

3.2.3. Artificial Bee-Colony (ABC) Algorithm

The ABC algorithm is a recent addition to the class of global optimization algorithms. The ABC algorithm [10] is inspired by the foraging behaviour of bees. It has been shown recently, on over 50 benchmark functions (unimodal to multimodal, separable to nonseparable), that the ABC algorithm outperforms other popular SI techniques such as the genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO) [48] due to a better trade-off between exploration and exploitation.

The algorithm uses several steps inspired by the collective behaviour of bees in the collection and processing of nectar. The basic ABC algorithm uses three classes of bees, that is, employed bees, onlooker bees, and scout bees. Employed bees are responsible for locating the nectar sources and sharing this information with the onlookers on the dance area of the hive. Onlooker bees select a nectar source based on its quality (objective function) with some probability; therefore, the number of onlookers at a better-quality food source is higher. The task of the employed and onlooker bees is to explore the search space. Scout bees are similar to the catfish particles and are used to discover new food sources in the search space by a random search process; further, employed bees whose food sources are abandoned become scout bees. Each food source (a set of model parameters) is a feasible solution of the problem, and the nectar amount of a food source represents the quality of the solution, that is, the objective function. The quality of a solution is represented by the fitness value given by

$$fit_i = \frac{1}{1 + f\left(\mathbf{x}_i\right)}, \quad (11)$$

where f is the objective function (5) and x_i is the model parameter vector of the problem. Half of the colony is assumed to be comprised of employed bees and the other half of onlookers. Each food source is exploited by only one employed bee; therefore, the number of employed (or onlooker) bees is equal to the number of food sources. The following four steps are used in the algorithm.

(i) Initialization Phase. A predefined number of food sources are randomly generated in the D-dimensional parametric space, where D is the dimension of the parametric space representing the number of parameters to be optimized. The food sources are generated using

$$x_{ij} = x_{j}^{min} + \mathrm{rand}(0,1)\left(x_{j}^{max} - x_{j}^{min}\right), \quad (12)$$

where i = 1, 2, …, SN (SN is the number of food sources and equals half of the colony size); j = 1, 2, …, D (D is the dimension of the problem); and x_j^min and x_j^max are the lower and upper bounds of the jth parameter. The fitness of all the food sources is evaluated using a suitable fitness norm. Further, the counters which store the number of trials of each solution are reset to 0 in this phase.

(ii) Employed Bees Phase. Each employed bee produces a modification to the solution in its memory depending on the local (visual) information. The modification to the present food source is determined using the following equation:

$$v_{ij} = \omega\, x_{ij} + \phi_{ij}\left(x_{ij} - x_{kj}\right), \quad (13)$$

where ω represents the inertia factor (ω = 1 is usually used in the canonical ABC), x_k represents a randomly selected food source different from x_i, and φ_ij is a uniformly distributed random number in the range [−1, 1]. The fitness of the newly discovered solution is computed, and a greedy selection process is applied between the new (v_i) and the original (x_i) solution. The better solution is stored in the memory. The trial count for this food source is reset to 0 if the solution is improved; otherwise, its value is increased by 1. The employed bees share the nectar (solution) information with the onlooker bees on the dance area after this phase.

(iii) Onlooker Bees Phase. In this phase, the food sources are chosen with a probability based on their nectar amount (quality of the objective function); a larger number of onlooker bees swarm around a food source when its fitness is high. The probability is calculated according to

$$p_i = \frac{fit_i}{\sum_{j=1}^{SN} fit_j}, \quad (14)$$

where the fitness of each agent is calculated according to (11). After this stage, each onlooker bee finds a new food source in the neighborhood based on (13), similar to the employed bees. The new position replaces the old one in the memory when the quality of the new position is better than that of the previous one. Otherwise, the bee keeps the old position in memory and the trial count is raised by 1.

(iv) Scout Bees Phase. A food source is abandoned and a new one is selected when the food source does not improve in quality within a predetermined number of iterations, called the "limit." The new food source is selected randomly in the search space using (12).
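The four phases above assemble into the basic ABC loop sketched below. This is a generic illustration of equations (11)-(14) with ω = 1 in (13); the colony size, iteration count, and scout limit are placeholders rather than the tuned values reported later.

import numpy as np

def abc(objective, bounds, n_food=10, iterations=50, limit=None, seed=0):
    # Basic artificial bee colony: employed, onlooker, and scout phases.
    # `bounds` is a sequence of (lower, upper) pairs, one per model parameter.
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    limit = limit if limit is not None else 2 * n_food * dim  # colony size x dim
    x = lo + rng.random((n_food, dim)) * (hi - lo)            # Eq. (12)
    f = np.array([objective(p) for p in x])
    trials = np.zeros(n_food, dtype=int)

    def neighbour(i):
        # Eq. (13) with omega = 1: perturb one dimension towards/away from x_k
        k = rng.choice([j for j in range(n_food) if j != i])
        j = rng.integers(dim)
        v = x[i].copy()
        v[j] = x[i, j] + rng.uniform(-1.0, 1.0) * (x[i, j] - x[k, j])
        return np.clip(v, lo, hi)

    def greedy(i, v):
        # Keep the better of the current and candidate solutions
        fv = objective(v)
        if fv < f[i]:
            x[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(iterations):
        for i in range(n_food):                 # employed bees phase
            greedy(i, neighbour(i))
        fit = 1.0 / (1.0 + f)                   # Eq. (11), f = RMSE >= 0
        prob = fit / fit.sum()                  # Eq. (14)
        for _ in range(n_food):                 # onlooker bees phase
            i = int(rng.choice(n_food, p=prob))
            greedy(i, neighbour(i))
        exhausted = int(np.argmax(trials))      # scout bees phase
        if trials[exhausted] >= limit:
            x[exhausted] = lo + rng.random(dim) * (hi - lo)
            f[exhausted] = objective(x[exhausted])
            trials[exhausted] = 0
    best = int(np.argmin(f))
    return x[best], f[best]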

Some modifications to the original ABC algorithm have been studied on benchmark functions. One modification of the basic ABC involves the position-update equation, which was used recently by Zhu and Kwong [49] on six benchmark functions with favorable results. However, the convergence towards the gbest solution cannot be controlled in the Zhu and Kwong [49] work due to the lack of acceleration factors. Apart from the modification to the position-update equation, the frequency and magnitude of the perturbation have been modified [50, 51]. OBL was used by Gao and Liu [52] in the initialization phase on benchmark functions. Hybrid techniques combining crossover operations of GA with ABC have been used recently by Karaboga and Kaya [53].

3.2.4. Modified ABC (MABC) Algorithm

The exploration-exploitation qualities of the classical ABC algorithm are improved in the present work. The following position-update equation is used for the employed bees in the modified algorithm:

$$v_{ij} = \omega\, x_{ij} + C\, \phi_{ij}\left(x_{ij} - x_{kj}\right), \quad (15)$$

where φ_ij is a random number varying within [−1, 1], ω is a constant, and C is the acceleration coefficient. The perturbation in (15) produces perturbed solutions around the present solution in a randomly chosen direction of another food source, x_k. The information of the fittest solutions is not required in this method, as the food sources are randomly selected to generate the new solutions.

The movement of the employed bees using the proposed position-update equation is based on the resultant direction of the best-fit food source (the current position) and the randomly chosen food source location. The inertia weight and the acceleration coefficient improve the exploration process. The position-update equation for the onlooker bees is based on (13) with an adaptive inertia factor, which helps in carrying out the exploitation of better food sources around the present positions. Further, the opposition-based concept is used to generate new food sources for the scout bees.

OBL is based on the utilization of the opposition numbers of the current positions in the search space [47]. If x = (x_1, x_2, …, x_D) is a food source position in the D-dimensional space, with x_j ∈ [a_j, b_j] for j = 1, 2, …, D, the new position based on OBL is the opposite point x̆ of x, which is defined as

$$\breve{x}_j = a_j + b_j - x_j, \quad (16)$$

where a_j and b_j are the lower and upper limits of the jth dimension.
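Equation (16) maps directly to a one-line helper; the sketch below (a hypothetical helper name) shows how the scout bees of the modified algorithm could generate opposite points.

import numpy as np

def opposite_point(x, a, b):
    # Opposition-based learning, Eq. (16): x_opp_j = a_j + b_j - x_j
    return (np.asarray(a, dtype=float) + np.asarray(b, dtype=float)
            - np.asarray(x, dtype=float))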

4. Inverse Analysis

The performance of all the SI techniques depends on their tuning parameters, which are very sensitive to the nature of the parametric space [14, 18, 54]. The tuning parameters of all four algorithms and the measured data are defined in the following subsections.

4.1. Experimental Sorption Data

Observed sorption data of metal substances, that is, Cu, Zn, Pb, Cr, Cd, and Ca, on various soils were taken from the literature for the selection and configuration of the described isotherms. Natural soils having various clay contents, different clay minerals, and pH values were chosen. The physical properties of the soils, as reported in the literature, are presented in Table 1. The observed data contain the sorbed concentration of the metal species against the corresponding equilibrium concentrations. The details of the experimental data, as reported in the literature, are given in Table 2.


Table 1: Physical properties of the soils as reported in the literature.

#  | Soil        | Reference                    | pH  | Soil type              | CEC  | Sand (%) | Silt (%) | Clay (%)
1  | Alligator   | Buchter et al., 1989         | 4.8 | Clay, Montmorillonitic | 30.3 | 5.9      | 39.4     | 54.7
2  | McLaren     | Selim and Amacher, 1997      | NA  | NA                     | NA   | NA       | NA       | NA
3  | Cecil       | Selim and Amacher, 1997      | 5.4 | Clay, Kaolinitic       | 2.4  | 30       | 18.8     | 51.2
4  | Kula        | Selim and Amacher, 1997      | 5.9 | Sandy Loam             | 27   | 66.6     | 32.9     | 0.5
5  | Webster     | Selim and Amacher, 1997      | 7.6 | Clay Loam              | 14.1 | 27.5     | 48.6     | 23.9
6  | Lafitte     | Selim and Amacher, 1997      | 3.9 | Sandy Loam             | 26.9 | 60.7     | 21.7     | 17.6
7  | Windsor     | Buchter et al., 1989         | 5.3 | Sandy Loam             | 2.0  | 76.8     | 20.5     | 2.8
8  | Molokai     | Buchter et al., 1989         | 6.0 | Clay Loam, Kaolinitic  | 11   | 25.7     | 46.2     | 28.2
9  | Spodosol    | Buchter et al., 1989         | 4.3 | Sand                   | 2.7  | 90.2     | 6.0      | 3.8
10 | Calciorthid | Buchter et al., 1989         | 8.5 | Sandy Loam             | 14.7 | 70       | 19.3     | 10.7
11 | Hartsells   | Bolster and Hornberger, 2007 | NA  | Fine Loamy             | NA   | NA       | NA       | NA
12 | Pembroke    | Bolster and Hornberger, 2007 | NA  | Fine Silty             | NA   | NA       | NA       | NA
13 | Loring      | Bolster and Hornberger, 2007 | NA  | Fine Silty             | NA   | NA       | NA       | NA


Table 2: Details of the experimental sorption data.

#  | Reactive metal substance | Soil        | Number of data points | Reference
1  | Cu | Alligator   | 7  | Selim and Amacher, 1997
2  | Cu | McLaren     | 5  | Selim and Amacher, 1997
3  | Cu | Cecil       | 5  | Selim and Amacher, 1997
4  | Zn | Alligator   | 10 | Selim and Amacher, 1997
5  | Zn | Kula        | 7  | Selim and Amacher, 1997
6  | Zn | Webster     | 8  | Selim and Amacher, 1997
7  | Pb | Cecil       | 6  | Selim and Amacher, 1997
8  | Pb | Spodosol    | 5  | Selim and Amacher, 1997
9  | Pb | Lafitte     | 7  | Selim and Amacher, 1997
10 | Pb | Alligator   | 8  | Selim and Amacher, 1997
11 | Cr | Alligator   | 15 | Buchter et al., 1989
12 | Cr | Windsor     | 12 | Buchter et al., 1989
13 | Cr | Kula        | 10 | Buchter et al., 1989
14 | Cd | Molokai     | 13 | Buchter et al., 1989
15 | Cd | Kula        | 10 | Buchter et al., 1989
16 | Cd | Windsor     | 14 | Buchter et al., 1989
17 | Cd | Spodosol    | 14 | Buchter et al., 1989
18 | Cd | Calciorthid | 7  | Buchter et al., 1989
19 | Ca | Hartsells   | 6  | Bolster and Hornberger, 2007
20 | Ca | Pembroke    | 6  | Bolster and Hornberger, 2007
21 | Ca | Loring      | 6  | Bolster and Hornberger, 2007

4.2. Parameter Setting

Twenty independent runs were used to test the performance of the proposed solvers. The number of iterations was fixed to 150 for all the algorithms and for all the isotherms. However, the number of agents was varied based on the number of model parameters, that is, the size of the dimensional (parametric) space. The algorithms used 10, 20, and 30 agents for fitting the experimental data with the Freundlich isotherm (2-dimensional), the Freundlich-Langmuir isotherm (3-dimensional), and the two-site Langmuir isotherm (4-dimensional), respectively.

The parameter settings for the different algorithms were determined empirically on synthetic test data for which the optima were known. The tuning parameters of both PSO algorithms, such as the acceleration coefficients, inertia factor, and constriction coefficient, are dependent on the given problem [11–14, 18]. The coefficient values that provided the best performance for the present problem were adopted, and the same parameter setting was used for both PSO algorithms. The chosen perturbation coefficient in (10) provides good convergence for the algorithms [14, 18]. The PCPSO algorithm required additional tuning parameters, namely, catIter and the number of catfish particles. The 20% worst-fit particles were considered as the catfish particles and were repositioned using OBL every 10 iterations.

The colony size in the ABC and MABC algorithms was twice the number of agents considered for the other algorithms; in other words, the number of employed bees and onlooker bees was the same and equal to the number of agents used by the other algorithms. In these algorithms, the objective function was evaluated three times for each agent in every iteration. Therefore, the number of iterations was fixed to 50 so that the same number of function evaluations per agent was used in all the algorithms. The scout limit was set equal to the colony size multiplied by the number of dimensions of the problem. The inertia parameter, ω, was varied from 2.0 at the beginning of the iterations to 0.5 at the end. The acceleration coefficient, C, in the MABC algorithm was fixed to 3.0 for the best performance based on the synthetic tests. The agents were repositioned randomly in the search space whenever they moved out of the boundaries of the D-dimensional parametric space.
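The settings described in this subsection can be collected in a small configuration helper; the sketch below is only a convenient summary of the values stated above, and values not stated in the text (such as the PSO acceleration coefficients) are deliberately omitted.

# Number of agents per isotherm (problem dimension in parentheses)
AGENTS = {"freundlich": 10,            # 2 parameters
          "freundlich_langmuir": 20,   # 3 parameters
          "two_site_langmuir": 30}     # 4 parameters

def solver_settings(isotherm, dim):
    n = AGENTS[isotherm]
    return {
        "independent_runs": 20,
        "pso_pcpso": {"agents": n, "iterations": 150,
                      "catfish_fraction": 0.2, "cat_iter": 10},
        "abc_mabc": {"colony_size": 2 * n, "iterations": 50,
                     "scout_limit": 2 * n * dim,
                     "omega_range": (2.0, 0.5), "C": 3.0},
    }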

The pseudocode of the proposed MABC algorithm is given in the Appendix.

5. Results and Discussion

5.1. Parametric Space

The parametric space for the experimental data of Cr sorption on Alligator soil was computed using the nonlinear isotherm equations. The parametric spaces were obtained for the two-parameter Freundlich and Langmuir isotherms (the latter obtained by substituting N = 1 in the Freundlich-Langmuir isotherm) and are shown in Figures 1 and 2, respectively. The RMSE (5) between the experimentally measured data and the theoretically computed data from 2,250,000 sets of model parameters (K_d, N or K_L, S_max) was used. The parametric spaces appear fairly different in the two cases although the same experimental data was used. Further, these functional terrains contain a unique global optimum solution, several local optima, and a plane surface having nearly the same RMSE values. The parametric space also varies with the measurement errors in the data and the number of data points, and the complexity of the functional terrain increases further with the parametric dimension. The conventional algorithms may converge to one of these local optima when the initial guess lies in such regions of the parametric space. The application of robust algorithms for determining global optimum solutions for the present problem is, therefore, justified.
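A brute-force scan of the kind described above can be sketched as follows, reusing the rmse and freundlich helpers given earlier; the parameter ranges and the 1500 x 1500 grid (2,250,000 parameter sets) are illustrative assumptions, not the ranges used in the study.

import numpy as np

def rmse_surface(C_obs, S_obs, Kd_range=(1.0, 5000.0), N_range=(0.01, 2.0), n=1500):
    # RMSE of the Freundlich model, Eq. (5), over an n-by-n grid of (Kd, N)
    Kd_values = np.linspace(Kd_range[0], Kd_range[1], n)
    N_values = np.linspace(N_range[0], N_range[1], n)
    surface = np.empty((n, n))
    for i, kd in enumerate(Kd_values):
        for j, nn in enumerate(N_values):
            surface[i, j] = rmse((kd, nn), freundlich, C_obs, S_obs)
    return Kd_values, N_values, surface   # e.g., for a contour plot of the terrain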

5.2. Gradient-Based Algorithm

The performance of gradient-based algorithms depends on the user-supplied initial solution (guess) and the type of error measure. The sensitivity of these algorithms to the initial guess was verified qualitatively for the present problem. The "fmincon" MATLAB® routine, which performs constrained optimization of a multivariable function using the interior-point technique, was used for finding the isotherm model parameters that provided the best-fit theoretical sorption data to the measured data. The interior-point technique was found to have better convergence abilities than the trust-region-reflective or other available techniques. A finite-difference numerical approximation was used to determine the first- and second-order derivatives of the objective function with respect to the model parameters. The tolerance value of the objective function was fixed to 1 × 10^-8, and a very large value was used for the maximum number of iterations, which served as the stopping criterion. Different initial guess solutions were used to obtain the converged solutions. The performance of the gradient-based algorithm for fitting the two-site Langmuir isotherm to the experimental data of Cd sorption on Molokai soil is presented in Table 3. The solutions converged to several suboptimal and local solutions for different initial guesses. The predicted model parameters were fairly different from each other, but the RMSEs were very close to each other; thus, finding a global-best solution is fairly complex in such situations. It is evident from these results that the performance of the gradient-based algorithm is highly sensitive to the initial guess and that the convergence is impeded by the presence of local minima.
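The experiment can be reproduced in spirit with any local, gradient-based optimizer. The sketch below uses SciPy's minimize with the L-BFGS-B method as a stand-in for MATLAB's fmincon interior-point routine (not the same algorithm), together with the rmse and two_site_langmuir helpers defined earlier; the bounds and starting points in the comment are placeholders, not values from the study.

import numpy as np
from scipy.optimize import minimize

def fit_from_guess(C_obs, S_obs, x0, bounds):
    # Local, gradient-based fit of the two-site Langmuir model, Eq. (4),
    # starting from a single initial guess x0 = (Smax1, K1, Smax2, K2)
    result = minimize(rmse, x0, args=(two_site_langmuir, C_obs, S_obs),
                      method="L-BFGS-B", bounds=bounds, tol=1e-8)
    return result.x, result.fun

# Different starting points typically land in different local minima, e.g.:
# for x0 in ([900.0, 0.1, 20.0, 5.0], [100.0, 0.1, 100.0, 0.5]):
#     print(fit_from_guess(C_obs, S_obs, x0, bounds))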


Table 3: Performance of the gradient-based algorithm for fitting the two-site Langmuir isotherm to the Cd sorption data on Molokai soil for different initial guesses.

Initial guess (four model parameters) | Predicted solution (four model parameters) | Error (RMSE)

9200.11723.74.4923.840.11921.17464.8840.071872
8800.14711.57880.290.13019.51525.0540.072161
1000.1107490.2110.320356.7010.0750.089178
1000.1100.5515.7770.308324.2130.0750.089366
1000.11000.5429.7050.350429.9150.0790.088687

5.3. Parameter Estimation by SI Based Solvers

Model parameters of the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms were predicted for the experimental data using the developed solvers. The performance of the solvers was determined based on the mean fitness value, the best-fit solution, and the standard deviation (STD) computed from 20 independent runs, as the SI techniques are stochastic in nature and may produce different solutions in different runs. The robustness of the algorithms is evaluated by comparing the mean fitness value with the best fitness (i.e., the minimum error between theoretical and experimental data) and by determining the standard deviation. A robust solver predicts a best-fit solution very close to the mean fitness solution and yields the smallest standard deviation, that is, close to zero. The model parameters obtained by the different algorithms for all 21 experiments using the Freundlich, Freundlich-Langmuir, and two-site Langmuir isotherms are given separately in Tables 4(a), 5(a), and 6(a), respectively. The best and worst solutions obtained in 20 independent runs, based on the objective function values, are provided. The Langmuir isotherm was not used in the comparison as it is a special case of the Freundlich-Langmuir isotherm with N = 1.
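The evaluation protocol (20 independent runs, then best, worst, mean, and standard deviation of the final errors) can be expressed as a small wrapper around any of the stochastic solvers sketched earlier; the solver signature assumed below matches those sketches, not the original code.

import numpy as np

def evaluate_solver(solver, objective, bounds, runs=20, **solver_kwargs):
    # Run a stochastic solver `runs` times and summarise the final errors
    solutions, errors = [], []
    for run in range(runs):
        params, err = solver(objective, bounds, seed=run, **solver_kwargs)
        solutions.append(params)
        errors.append(err)
    errors = np.array(errors)
    i_best, i_worst = int(errors.argmin()), int(errors.argmax())
    return {"best": (solutions[i_best], errors[i_best]),
            "worst": (solutions[i_worst], errors[i_worst]),
            "mean": float(errors.mean()),
            "std": float(errors.std())}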

Table 4 (a): Best and worst model parameters and RMSE obtained by the four solvers for the Freundlich isotherm (for each experiment, the three rows are K_d, N, and RMSE).

# | Parameter | PSO Best | PSO Worst | PCPSO Best | PCPSO Worst | ABC Best | ABC Worst | MABC Best | MABC Worst

1252.6228.5252.6244.2252.6262.2252.6252.6
0.5330.5180.5330.5010.5330.5980.5330.533
0.1450.1510.1450.1480.1450.1580.1450.145

228.40635.428.5122.2128.4129.9528.4128.37
0.7291.910.7290.7960.7290.7060.7290.729
0.1112.4990.1110.1260.1110.1120.1110.111

322.25451222.2514.6222.2526.0622.2522.41
0.5590.0060.5590.7480.5590.500.5590.556
0.0231.7530.0230.1140.0230.0420.0230.023

428.504.4528.5042.8128.5028.7028.5028.50
0.9944.190.9940.9730.9940.9850.9940.994
0.0802.490.0800.1880.0800.0800.0800.080

5231.1226.9231.1201.7231.1231.1231.1231.1
0.7281.6010.7280.7390.7280.7280.7280.728
0.0400.6400.0400.0730.0400.0400.0400.040

6736.8309.8736.8871.9736.8788.9736.8736.8
0.6791.1550.6790.7250.6790.7010.6790.679
0.0971.1910.0970.1110.0970.1000.0970.097

7265.7234.9265.71480265.7265.6265.7265.7
0.6951.4090.6951.4370.6950.6950.6950.695
0.0590.5440.0590.8190.0590.0590.0590.059

8126.664.57126.6126.9126.6126.6126.6126.6
0.7501.6930.7500.7500.7500.7500.7500.750
0.0850.6650.0850.0850.0850.0850.0850.085

9994.71176994.7977.2994.71414994.7994.7
1.6453.5811.6451.6381.6451.8591.6451.645
0.1331.0800.1330.1330.1330.1560.1330.133

1018484872184860631848299318481846
0.8613.1520.8611.2160.8611.0490.8610.860
0.1112.5830.1110.2830.1110.1640.1110.111

113.49337.663.4933.0273.4933.6063.4933.493
0.5010.5150.5010.5410.5010.4600.5010.501
0.0911.0410.0910.1160.0910.1040.0910.091

128.519166.78.5198.4518.5198.5188.5198.519
0.5030.9450.5030.4470.5030.5020.5030.503
0.1201.4000.1200.1340.1200.1200.1200.120

1363.6132.0663.6144.6763.6163.6663.6163.61
0.6052.6800.6050.6390.6050.6050.6050.605
0.0832.1270.0830.1690.0830.0830.0830.083

1486.28832.586.2883.8786.2896.9986.2886.28
0.7441.0900.7440.7310.7440.7670.7440.744
0.1360.9970.1360.1370.1360.1450.1360.136

15194.41365194.4176.9194.4194.4194.4194.4
0.7381.1240.7380.7250.7380.7380.7380.738
0.1080.6810.1080.1130.1080.1080.1080.108

1614.95487314.9518.5114.9531.2014.9514.95
0.7875.3390.7870.7830.7871.3480.7870.787
0.0945.3000.0940.1340.0940.6610.0940.094

175.30164.295.3015.5955.3015.2895.3015.301
0.8351.0630.8350.8590.8350.8350.8350.835
0.0681.0830.0680.0750.0680.0680.0680.068

18407.7979.9407.7459.0407.7483.14407.7407.7
0.6020.9400.6020.6690.6020.6650.6020.602
0.0790.3470.0790.1030.0790.1010.0790.079

1981.6999.2981.6992.0581.6969.3681.6981.69
0.2641.4340.2640.2830.2640.3750.2640.264
0.0511.0520.0510.0800.0510.0980.0510.051

2043.701.42843.7011.5743.7044.2243.7043.73
0.4192.6780.4191.2040.4190.4130.4190.418
0.0291.1210.0290.3790.0290.0290.0290.029

2183.2633.3083.2686.5283.2686.2083.2683.26
0.3121.0100.3120.3050.3120.2940.3120.312
0.0310.4680.0310.0340.0310.0340.0310.031

Table 4 (b): Mean, best, and standard deviation of the RMSE values over 20 independent runs for the Freundlich isotherm.

# | PSO algorithm (Mean, Best, Std) | PCPSO algorithm (Mean, Best, Std) | ABC algorithm (Mean, Best, Std) | MABC algorithm (Mean, Best, Std)

10.35550.14490.14520.14490.14570.14490.14490.1449
20.41900.11120.11190.11120.12260.111220.11120.1112
30.30790.02330.03110.023330.02330.02330.02330.0233
40.20450.07980.08190.07980.08270.07980.07980.0798
50.06140.04040.04040.04040.04040.04040.04040.0404
60.11430.09730.09730.09730.09790.09730.09730.0973
70.08090.05890.05890.05890.05890.05890.05890.0589
80.14760.08520.08530.08520.08520.08520.08520.0852
90.15370.13300.13300.13300.13300.13300.13300.1330
100.14650.11110.11190.11110.11130.11110.10100.1010
110.43550.09110.13350.09120.09150.09110.09110.0911
120.28520.11980.14500.11980.11980.11980.11980.1198
130.62640.08260.08810.08250.08260.08260.08250.0825
140.42600.13550.13970.135530.13550.13550.13550.1355
150.18280.10810.11420.10810.10810.10810.10810.1081
160.11550.09440.11850.09440.09440.09440.09440.0944
170.66460.06760.10190.06760.06770.06770.06760.0676
180.18210.07900.08910.07900.08100.07900.07900.0790
190.24000.05090.06580.05090.05310.05090.05090.0509
200.16770.02920.04370.02920.03780.02920.02920.0292
210.42730.03100.04440.03100.05910.03100.01760.0176

Table 5 (a): Best and worst model parameters and RMSE obtained by the four solvers for the Freundlich-Langmuir isotherm (for each experiment, the four rows are S_max, K, N, and RMSE).

# | Parameter | PSO Best | PSO Worst | PCPSO Best | PCPSO Worst | ABC Best | ABC Worst | MABC Best | MABC Worst

11174358.21166574.81176887.711671361
0.4035.4030.4082.1950.3990.7300.4070.301
0.7790.9980.7810.9870.7790.9050.7800.725
0.1280.3060.1280.2020.1280.1320.1280.129

2711.9143.3668.0403.2744.1695.6674.6905.3
0.0344.4000.0360.1330.0320.0410.0360.026
0.9880.3290.9720.5210.9860.9041.0000.958
0.0930.4060.0950.2430.0930.1020.0920.096

3645.091.33791.393.471072276.5904.7402.7
0.0317.3480.0294.7050.0210.0800.0230.049
0.6880.4180.6150.5200.6180.7900.6330.758
0.0230.3150.0240.3090.0220.0510.0200.027

4845.882.25126179.963426158658788999
0.0412.7050.0244.2360.0090.0190.0050.003
0.9640.6390.9580.9951.0000.9991.0000.999
0.1520.6740.1310.6670.0780.0960.0760.077

52244283.22334771.32259166424292280
0.1322.4370.1250.5700.1310.2010.1190.129
0.8680.6390.8600.7200.8690.9430.8540.864
0.0310.3990.0310.2170.0310.0350.0310.031

61420380.1146035031424138214201434
1.2805.4701.2010.2421.2731.3561.2811.258
0.9500.5610.9400.6670.9490.9590.9500.947
0.0740.3810.0740.1070.0740.0740.0740.074

71530317.3163935471678125916901780
0.2535.0470.2300.0770.2220.3460.2190.204
0.8710.8390.8560.5010.8530.9270.8510.842
0.0460.3580.0460.1730.0460.0480.0460.046

82458210.71753213.72234164027152260
0.0583.1770.0845.1920.0640.0930.0520.064
0.8510.6860.9090.9930.8560.9400.8410.865
0.0820.4240.0840.4250.0820.0850.0820.082

92288174.34857237.41333338011334213268
0.2235.2570.1056.4900.0360.1360.0360.036
0.9990.4010.9731.0000.9920.9991.0000.995
0.2660.5140.2630.4440.2490.2580.2480.249

102327649.82352752.42340232623282390
1.4021.9491.3896.7031.3991.4011.4021.350
1.0000.6080.9991.0001.0000.9981.0001.000
0.1010.3130.1010.1910.1010.1010.1010.101

1145.418.37940.787.22945.4838.9045.4744.64
0.0982.1720.1156.3330.0980.1210.0980.099
0.6170.6390.6340.9990.6160.6500.6170.615
0.0700.3950.0710.4050.0700.0720.0700.070

1245.9112.6145.8615.7945.9743.9145.9146.06
0.3421.8310.3434.9440.3410.3710.3420.340
0.7630.5280.7640.9940.7620.7750.7630.761
0.0310.3620.0310.2830.0310.0320.0310.031

13748.9131.3758.31480755.0639.9752.7756.5
0.1146.5540.1120.0510.1130.1410.1130.112
0.7951.0000.7950.4670.7930.8330.7940.792
0.0370.4670.0370.1940.0370.0410.0370.037

14951.296.21895.3125.1949.0924.0951.3954.9
0.1509.3070.1666.7320.1500.1570.1500.149
0.9120.7020.9221.0000.9120.9200.9120.911
0.0790.7450.0790.5490.0790.0790.0790.079

15931.6155.3943.3157.1929.1959.3931.4933.6
0.4032.0350.3956.7230.4050.3770.4030.402
0.9180.6470.9161.0000.9190.9110.9180.918
0.0400.4260.0400.3570.0400.0400.0400.040

16275.918.57651.888.24503.8294.5497.5523.3
0.0763.8570.0280.3090.0370.0720.0370.035
0.9160.8960.8690.9550.8700.9170.8690.860
0.0820.5730.0690.2060.0650.0790.0650.065

17380.15.862275.97.135580.9250.3635.9541.8
0.0167.6080.0221.2550.0100.0250.0090.011
0.9050.7010.8960.9790.8660.9360.8650.869
0.0680.7970.0710.5810.0630.0790.0630.063

181288291.01288451.11296111112871277
0.6094.9500.6085.6870.6010.7960.6090.623
0.7660.7460.7661.0000.7650.7970.7660.770
0.0320.2780.0320.1590.0320.0350.0320.032

19181.1121.4181.1127.2181.3177.3181.2180.5
1.0586.7371.0596.1681.0551.1221.0571.069
0.6600.2410.6610.9970.6600.6840.6600.664
0.0240.1790.0240.0990.0240.0240.0240.024

20244.8100.5239.3101.3244.5201.6247.8267.0
0.2055.0560.2125.4640.2050.2550.2020.185
0.6610.7410.6660.4350.6620.7630.6570.632
0.0150.1770.0160.1840.0150.0190.0150.016

21266.2132.2273.1167.2267.6222.6265.7252.1
0.5015.6290.4801.6560.4990.7010.5020.553
0.5400.3740.5290.9350.5390.6410.5400.562
0.0110.1750.0110.0460.0110.0160.0110.011

Table 5 (b): Mean, best, and standard deviation of the RMSE values over 20 independent runs for the Freundlich-Langmuir isotherm.

# | PSO algorithm (Mean, Best, Std) | PCPSO algorithm (Mean, Best, Std) | ABC algorithm (Mean, Best, Std) | MABC algorithm (Mean, Best, Std)

10.20080.12820.13330.12830.12880.12820.12820.1282
20.21560.09550.16130.09400.09750.09430.09280.0924
30.11680.02470.06860.02390.03290.02370.02110.0201
40.37480.12520.29160.13860.12140.10630.07960.0771
50.08900.03080.04440.03160.03580.03090.03150.0308
60.08520.07420.08190.07420.07430.07420.07420.0742
70.13030.04570.05090.04580.04770.04600.04590.0457
80.12340.08240.11090.08270.08640.08350.08190.0817
90.31880.25910.29230.26890.26810.26070.25000.2490
100.18000.10100.10700.10100.10140.10100.10110.1010
110.16990.07040.08730.07240.07910.07150.07060.0704
120.13540.03090.05980.03100.03310.03120.03100.0309
130.15140.03670.06740.03690.04100.03730.03670.0367
140.21910.07880.19390.07880.08110.0791