Mathematical Problems in Engineering
Volume 2013, Article ID 103748, 8 pages
Research Article

Parameter Identification of Anaerobic Wastewater Treatment Bioprocesses Using Particle Swarm Optimization

Dorin Sendrescu

Department of Automatic Control, University of Craiova, A.I. Cuza 13, 200585 Craiova, Romania

Received 25 January 2013; Accepted 30 June 2013

Academic Editor: Carlo Cattani

Copyright © 2013 Dorin Sendrescu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper deals with offline parameter identification for a class of wastewater treatment bioprocesses using particle swarm optimization (PSO) techniques. Particle swarm optimization is a relatively new heuristic method that has produced promising results for solving complex optimization problems. In this paper, several variants of the PSO algorithm are used for parameter estimation of an anaerobic wastewater treatment process, a complex biotechnological system. The identification scheme is based on a high-dimensional, multimodal numerical optimization problem. The performance of the method is analyzed by numerical simulations.

1. Introduction

It is well known that biotechnology is one of the fields that has developed rapidly over the last decades. Due to its advantages, the control of industrial bioprocesses has become an important practical problem attracting wide attention. Biotechnology applications can be found especially in agriculture, the food industry, medicine and pharmaceutical processes, waste treatment processes, and so forth. A frequent and important challenge in the control of such living processes is finding an accurate model of the system. Bioprocesses are highly nonlinear, and their kinetic parameters are usually badly or inadequately known [1]. This problem becomes of great importance in complex systems where critical instability of the process must be avoided. Parameters characterizing the internal behavior of biotechnological systems are usually not directly accessible to measurement, so their determination is usually approached indirectly as a parameter estimation problem [2]. In this paper, a dynamic model describing the internal structure of the system is formulated, and an algorithm based on PSO for parameter estimation is designed.

In recent years, progress has been made in the area of continuous-time system identification [3]. Even though most physical systems are naturally continuous, much more attention has been paid to parameter estimation of discrete-time systems, mainly because they are better suited for numerical implementation. Continuous-time identification makes possible a more direct link to the physical properties and operation of the underlying systems and allows the direct estimation of physical parameters that have a clear significance. The most common approach for parameter estimation of linear or nonlinear systems is the use of prediction-error identification methods (PEM) [4]. Into this category fall the well-known least squares and maximum likelihood methods. In this approach, identification consists in the minimization of a scalar-valued function of the model parameters. In general, this function cannot be minimized by analytical methods, so the solution has to be found by iterative numerical techniques. There is an extensive literature on such numerical problems. In the classical approach, the most used procedures are quasi-Newton methods and interior point algorithms. The main drawback of these nonlinear parameter optimization techniques is that they are often unreliable; for example, they give no guarantee of converging to a true minimum. The increasing computational power of personal computers and microcontrollers has allowed the implementation of several optimization algorithms inspired by natural phenomena. Examples include Simulated Annealing [5], Genetic Algorithms (GA) [6], and Ant Colony Optimization [7]. Particle Swarm Optimization (PSO) [8] is among these nature-inspired algorithms. It is inspired by the ability of flocking birds to find food whose location they have no previous knowledge of. Every member of the swarm is affected by its own experience and its neighbors' experiences.
Although the idea behind PSO is simple and can be implemented by two lines of programming code, the emergent behavior is complex and hard to completely understand [9, 10].

The most important approaches for the estimation of yield and kinetic coefficients of biotechnological systems make use of state transformations based on the general model structure [11]. In this paper, we propose an identification method based on particle swarm optimization techniques for this class of biotechnological systems, considering that the unknown parameters can appear in rational relations with the measured variables. The paper is organized as follows. The nonlinear dynamical model of an anaerobic wastewater treatment bioprocess is given in Section 2. Section 3 presents the identification algorithm based on particle swarm optimization techniques. Numerical simulations are presented in Section 4, and conclusions in Section 5.

2. Nonlinear Dynamical Model of Anaerobic Wastewater Treatment Bioprocesses

A process that takes place in a bioreactor can be described as a set of biochemical reactions involving a number of components. The global dynamics can be represented by the following dynamical state-space model [1]:

dξ/dt = K φ(ξ, t) − D ξ + F − Q, (1)

where ξ is the state vector of component concentrations.

This model describes the behavior of an entire class of biotechnological processes and is referred to as the general dynamical state-space model of this class of bioprocesses [1]. In (1), the term K φ(ξ, t) is the rate of consumption and/or production of the components in the reactor, that is, the reaction kinetics, and the term −Dξ + F − Q represents the exchange with the environment. The strongly nonlinear character of the model (1) is given by the reaction kinetics. In many situations, the yield coefficients, the structure, and the parameters of the reaction rates are partially known or unknown. Many of the evolved control methods for these kinds of systems — such as model predictive control and robust or adaptive control — are based on good initial estimates of the yield and kinetic parameters.

Anaerobic digestion is a multistage biological wastewater treatment process whereby bacteria, in the absence of oxygen, decompose organic matter to carbon dioxide CO2, methane CH4, and water [12]. Four metabolic paths can be identified in this process: two for acidogenesis and two for methanation (see Figure 1). In the first acidogenic path, glucose (or another complex substrate) from the wastewater is decomposed into volatile fatty acids (acetates, propionic acid), hydrogen H2, and inorganic carbon by acidogenic bacteria.

Figure 1: A schematic view of a typical anaerobic digestion process.

In the second acidogenic path, Obligate Hydrogen Producing Acetogens (OHPA) decompose propionate into acetate, H2, and inorganic carbon. In the first methanation path, acetate is transformed into CH4 and CO2 by acetoclastic methanogenic bacteria, while, in the second methanation path, H2 combines with inorganic carbon to produce CH4 under the action of hydrogenophilic methanogenic bacteria. The reaction scheme of this complex bioprocess involves 4 reactions and 10 components.

Since anaerobic digestion is a complex bioprocess, its full dynamical model being described by ten differential equations, an appropriately reduced-order model can be used for control purposes. Using the singular perturbation method, the reduced-order dynamical model (2)–(7) can be obtained, where X1 and X2 are the concentrations of acidogenic and acetoclastic methanogenic bacteria, respectively; S1, S2, and IC are the concentrations of glucose, acetate, and inorganic carbon, respectively; and CH4 is the methane concentration. Here φ1 and φ2 are the rates of the first acidogenic reaction and of the methanation reaction, respectively; Q_M and Q_C represent the gaseous outflow rates of CH4 and CO2, respectively; S_in is the influent substrate concentration; D is the dilution rate; and k_i are yield coefficients. Each reaction rate is a growth rate and may be written as φ_i = μ_i X_i, where μ_i, i = 1, 2, is the specific growth rate of reaction i. Defining the state vector as ξ = (X1, X2, S1, S2, IC, CH4)^T, the model (2)–(7) can be written in the matrix form (8), or in the compact form

dξ/dt = K φ(ξ) − D ξ + F − Q, (9)

where F is the vector of inflow rates, Q is the vector of gaseous outflow rates, and φ(ξ) is the vector of reaction rates, which can be written as φ(ξ) = G(ξ) α, with G(ξ) a diagonal matrix whose entries are products of the component concentrations involved in each reaction, α the vector of specific reaction rates, and K the yield coefficient matrix. The matrices K and G(ξ) have structures determined by the reaction scheme above.

The most difficult task in the construction of the dynamical model is the modeling of the reaction kinetics [13]. The form of the kinetics is complex, nonlinear, and in many cases unknown. In our study, the reaction rates are assumed to be given by the Monod law and the Haldane kinetic model:

μ1(S1) = μ1* S1 / (K_M1 + S1),  μ2(S2) = μ2* S2 / (K_M2 + S2 + S2²/K_i),

where K_M1 and K_M2 are Michaelis-Menten constants, μ1* and μ2* represent specific growth rate coefficients, and K_i is the inhibition constant.
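As an illustration, the two kinetic laws can be evaluated with a few lines of code (a minimal sketch; the coefficient values below are placeholders chosen for illustration, not the identified plant parameters):

```python
def monod(s, mu_max, km):
    """Monod law: the growth rate rises with substrate and saturates at mu_max."""
    return mu_max * s / (km + s)

def haldane(s, mu_max, km, ki):
    """Haldane law: Monod-like at low substrate, inhibited at high substrate."""
    return mu_max * s / (km + s + s ** 2 / ki)

# Placeholder coefficients, chosen only to illustrate the two shapes
print(monod(2.0, mu_max=0.4, km=1.0))
print(haldane(2.0, mu_max=0.4, km=1.0, ki=10.0))
```

Unlike the Monod law, the Haldane rate eventually decreases as the substrate concentration grows, which models substrate inhibition.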

For simplicity, we will denote the unknown plant parameters (the yield coefficients and the kinetic coefficients above) by the vector θ.

3. Parameter Estimation Using PSO

At the beginning of parameter estimation, the input and output data are known, and the real system parameters are assumed to be unknown. The identification problem is formulated in terms of an optimization problem in which the error between the actual measured response of the physical system and the simulated response of a parameterized model is minimized. The estimation of the system parameters is achieved by minimizing this error function with the PSO algorithm.

3.1. Problem Statement

Consider the following n-dimensional nonlinear system:

dx/dt = f(x, θ), (15)

where x is the state vector, θ is the unknown parameter vector, and f is a given nonlinear vector function.

To estimate the unknown parameters in (15), a parameter identification system is defined as follows:

dx̂/dt = f(x̂, θ̂), (16)

where x̂ is the estimated state vector and θ̂ is the estimated parameter vector.

The objective function, defined as the mean squared error between real and estimated responses over a number of given samples, is considered as the fitness of the estimated model parameters:

J(θ̂) = (1 / (m N)) Σ_{j=1}^{m} Σ_{k=1}^{N} (x_j(k) − x̂_j(k))², (17)

where m is the number of measurable states, N is the data length used for parameter identification, and x_j(k) and x̂_j(k) are the real and estimated values of state j at time k, respectively.

This objective function is difficult to minimize because it has many local minima, and the global minimum has a very narrow domain of attraction. Our goal is to determine the system parameters θ̂, using particle swarm optimization algorithms, in such a way that the value of J(θ̂) is minimized, approaching zero as closely as possible.
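A direct transcription of this objective function is straightforward (a sketch; `simulate` stands for any routine that integrates the model for a candidate parameter vector, and the array shapes are assumptions):

```python
import numpy as np

def fitness(theta, x_meas, simulate):
    """Mean squared error between measured and simulated trajectories.

    x_meas : array of shape (N, m), N samples of the m measurable states.
    simulate(theta) must return an array of the same shape.
    """
    x_hat = simulate(theta)
    return np.mean((x_meas - x_hat) ** 2)

# Toy check on a scalar exponential-decay model x(t) = exp(-theta * t)
t = np.linspace(0.0, 1.0, 50)
simulate = lambda theta: np.exp(-theta * t)[:, None]
x_meas = simulate(0.5)
print(fitness(0.5, x_meas, simulate))  # exactly 0 at the true parameter
print(fitness(0.8, x_meas, simulate))  # positive away from it
```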

3.2. Overview of Basic PSO Algorithms

During the last decade, PSO algorithms have gained much attention and wide application in different fields due to their effectiveness on difficult optimization problems, their simplicity of implementation, and their ability to converge quickly to a reasonably good solution [14–16]. PSO is a population-based heuristic global optimization technique, first introduced by Kennedy and Eberhart [8] and referred to as a swarm-intelligence technique. It is motivated by the simulation of the social behavior of animals, such as bird flocking and fish schooling. In this algorithm, the population is called a swarm, and the trajectory of each particle in the search space is controlled through a term called "velocity," according to the particle's own flying experience and the swarm's experience in the search space. A mathematical description of basic PSO and of some important variants is presented in the following.

Candidate solutions of a population, called particles, coexist and evolve simultaneously based on knowledge sharing with neighboring particles. Each particle represents a potential solution to the optimization problem, and it has a fitness value given by the objective function. Supposing that the search space is D-dimensional, each individual is treated as a particle in the D-dimensional search space. The position and velocity of the ith particle are represented by the D-dimensional vectors x_i and v_i, respectively. The best position previously visited by the ith particle is recorded and represented as p_i, called pbest. The best position previously visited by all the particles in the population is represented as p_g, called gbest. Particles then search for their best positions, guided by the swarm information p_g and their own information p_i. Each particle modifies its velocity to find a better solution (position) by applying its own flying experience (i.e., the memory of the best position found in earlier flights) and the experience of neighboring particles (i.e., the best solution found by the population). Each particle's position is evaluated by the fitness function, and its velocity and position are updated according to the following equations:

v_i(t+1) = w v_i(t) + c1 r1 (p_i − x_i(t)) + c2 r2 (p_g − x_i(t)), (18)
x_i(t+1) = x_i(t) + v_i(t+1), (19)

where t is the iteration number, w is the inertia weight, c1 and c2 are two acceleration coefficients regulating the relative velocities toward the local and global best positions, and r1 and r2 are two random numbers from the interval [0, 1]. Many efforts have been made over the last decade to determine the inertia weight. Various studies have shown that, under certain conditions on w, c1, and c2, convergence to a stable equilibrium point is guaranteed. The technique originally proposed was to bound the velocities so that each component of v_i is kept within the range [−v_max, v_max].
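The velocity and position updates translate directly into code (a sketch for one particle; the D-dimensional vectors are held in numpy arrays, and the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """One basic PSO iteration for a single particle: update velocity, then position."""
    r1 = rng.random(x.size)
    r2 = rng.random(x.size)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -v_max, v_max)  # keep each component in [-v_max, v_max]
    return x + v_new, v_new

x_new, v_new = pso_step(np.zeros(3), np.zeros(3), np.ones(3), np.ones(3))
print(v_new)  # every component bounded by v_max
```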

Unfortunately, this simple form of PSO suffers from the premature convergence problem, particularly on complex problems, since the information exchanged among particles in PSO is too simple to encourage a global search. Many efforts have been made to avoid premature convergence. One solution is the use of a constriction factor to ensure convergence of the PSO, introduced in [17]. Thus, the expression for the velocity has been modified as

v_i(t+1) = χ [v_i(t) + c1 r1 (p_i − x_i(t)) + c2 r2 (p_g − x_i(t))], (20)

where χ represents the constriction factor and is defined as

χ = 2 / |2 − φ − sqrt(φ² − 4φ)|,  φ = c1 + c2,  φ > 4. (21)

In this variant of the PSO algorithm, χ controls the magnitude of the particle velocity and can be seen as a dampening factor. It provides the algorithm with two important features. First, it usually leads to faster convergence than standard PSO. Second, the swarm maintains the ability to perform wide movements in the search space, even if convergence is already advanced but a new optimum is found. Therefore, the constriction PSO has the potential to avoid being trapped in local optima while possessing fast convergence. It was shown to have superior performance compared to a standard PSO.
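The constriction factor depends only on the sum of the acceleration coefficients. The values c1 = c2 = 2.05 used below are a common choice in the PSO literature, not taken from this paper (a sketch):

```python
import math

def constriction_factor(c1, c2):
    """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(round(constriction_factor(2.05, 2.05), 4))  # 0.7298
```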

It has been shown that a larger inertia weight tends to facilitate global exploration, while a smaller inertia weight favors the local exploration needed to fine-tune the current search area [18]. The best performance could be obtained by initially setting w to some relatively high value (e.g., 0.9), which corresponds to a system where particles perform extensive exploration, and gradually reducing w to a much lower value (e.g., 0.4), where the system would be more dissipative and exploitative and would be better at homing in on local optima. In [19], an inertia weight linearly decreasing over time is proposed, where w is given by the following equation:

w(t) = w_start − (w_start − w_final) · t / t_max, (22)

where w_start and w_final are the starting and final values of the inertia weight, respectively, t_max is the maximum number of iterations, and t is the current iteration number. It is generally taken that w_start = 0.9 and w_final = 0.4 [20].
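The linearly decreasing inertia weight is a one-line schedule (a sketch using the 0.9 and 0.4 endpoints quoted above):

```python
def inertia_weight(it, it_max, w_start=0.9, w_final=0.4):
    """Linear decrease from w_start at iteration 0 to w_final at it_max."""
    return w_start - (w_start - w_final) * it / it_max

print(inertia_weight(0, 100))    # 0.9: wide exploration early
print(inertia_weight(100, 100))  # 0.4: fine local search late
```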

On the other hand, in [21] a PSO with time-varying acceleration coefficients was introduced. The improvement has the same motivation and uses techniques similar to the adaptation of the inertia weight. In this case, the cognitive coefficient c1 is decreased linearly and the social coefficient c2 is increased linearly over time as follows:

c1(t) = (c1f − c1i) · t / t_max + c1i,  c2(t) = (c2f − c2i) · t / t_max + c2i, (23)

where c1i and c2i are the initial values of the acceleration coefficients c1 and c2, and c1f and c2f are their final values, respectively. Typically, c1 is decreased from 2.5 to 0.5 while c2 is increased from 0.5 to 2.5 over a run [21].
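The time-varying acceleration coefficients follow the same linear pattern, with c1 decreasing and c2 increasing (a sketch; the 2.5/0.5 endpoints are the values commonly reported for this variant):

```python
def tvac(it, it_max, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Cognitive coefficient c1 decays while social coefficient c2 grows."""
    c1 = (c1f - c1i) * it / it_max + c1i
    c2 = (c2f - c2i) * it / it_max + c2i
    return c1, c2

print(tvac(0, 100))    # (2.5, 0.5): self-driven exploration early
print(tvac(100, 100))  # (0.5, 2.5): swarm-driven convergence late
```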

3.3. Identification Algorithm

Considering all the states of the nonlinear system (8) known at the sampling moments t_k = kT (T = sampling period), the identification algorithm has the following steps.

Step 1. Initialize a population of particles with random positions and velocities in the D-dimensional search space.

Step 2. For each particle, evaluate the desired optimization fitness function (17) of D variables.

Step 3. Compare each particle's fitness evaluation with its pbest. If the current value is better than pbest, then set pbest equal to the current value and the pbest position equal to the current position in the D-dimensional space.

Step 4. Identify the particle in the swarm with the best success so far, and assign its index to the variable gbest.

Step 5. Change the velocity and position of each particle according to (18) and (19).

Step 6. If a stopping criterion is met (usually a sufficiently good fitness or a maximum number of iterations), stop; otherwise, go to Step 2.
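Steps 1–6 can be sketched end to end on a toy one-parameter problem (an exponential-decay model standing in for the bioprocess; the swarm size, bounds, and iteration budget are illustrative choices, not those used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "plant": x(t) = exp(-theta * t) with true parameter theta = 0.5
t = np.linspace(0.0, 5.0, 100)
x_meas = np.exp(-0.5 * t)

def fitness(theta):
    """Mean squared error between measured and simulated responses."""
    return np.mean((x_meas - np.exp(-theta * t)) ** 2)

# Step 1: random positions and velocities in the search interval [0, 2]
n_particles, n_iter = 20, 100
x = rng.uniform(0.0, 2.0, n_particles)
v = np.zeros(n_particles)

pbest = x.copy()                                 # best position per particle
pbest_fit = np.array([fitness(p) for p in x])    # Step 2: initial evaluation
gbest = pbest[np.argmin(pbest_fit)]              # Step 4: swarm best

w, c1, c2, v_max = 0.7, 1.5, 1.5, 0.5
for _ in range(n_iter):                          # Step 6: iteration budget
    for i in range(n_particles):
        # Step 5: velocity and position update
        r1, r2 = rng.random(), rng.random()
        v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
        v[i] = max(-v_max, min(v_max, v[i]))     # bound the velocity
        x[i] += v[i]
        f = fitness(x[i])                        # Step 2: evaluate fitness
        if f < pbest_fit[i]:                     # Step 3: update pbest
            pbest[i], pbest_fit[i] = x[i], f
    gbest = pbest[np.argmin(pbest_fit)]          # Step 4: update gbest

print(gbest)  # close to the true value 0.5
```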

3.4. “Classical” Identification Procedure

The "classical" approach to the identification of yield coefficients is a two-step procedure under the assumption that full state measurements are available [1]. This method is based on a state transformation that allows reformulating the dynamical model into separate submodels. The first submodel depends only on the reaction structure and is independent of the kinetics. It can be linearly reparametrized and used for the identification of the yield coefficients by means of linear regression, provided suitable identifiability conditions are satisfied. We briefly present this method for estimating the yield coefficients and use it for comparison in the next section.

The general dynamical model given in (2) represents a particular class of nonlinear state-space models. The nonlinearity lies in the reaction rates, which are (nonlinear) functions of the state variables. These functions enter the model in the form Kφ(ξ) (where K is a constant matrix), that is, as a set of linear combinations of the same nonlinear functions φ(ξ). This particular feature can be exploited to separate the nonlinear part from the linear part of the model by an adequate linear state transformation. More precisely, one chooses a partition (ξa, ξb) of the state, induced by a permutation matrix, such that the corresponding submatrix Ka of K has full row rank (i.e., rank(Ka) = rank(K)). The induced partitions of the vectors F and Q are (Fa, Fb) and (Qa, Qb), and the model (2) is then partitioned into two submodels. With the state transformation z = A0 ξa + ξb, one transforms the initial model into the subsystem

dz/dt = −Dz + (A0 Fa + Fb) − (A0 Qa + Qb), (29)

where the matrix A0 is the unique solution of A0 Ka + Kb = 0; that is, A0 = −Kb Ka^+, where Ka^+ is a generalized inverse or pseudoinverse of Ka. The subsystem (29) can be augmented with an equation derived from (8) to obtain the auxiliary model (31).

The auxiliary model (31) can be considered as a linear time-varying model (if D varies in the course of time) whose inputs and output are measured signals. It is nonlinearly parametrized by the yield coefficients but linearly reparametrized by the nonzero entries of A0. When data for the measured signals are available, the model (31) can therefore be used to identify the yield coefficients independently of any knowledge of the reaction rates: the nonzero entries of A0 are identified by a linear regression technique, and the yield coefficients are recovered afterwards from (29).
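The regression step at the end reduces to ordinary least squares. The sketch below uses synthetic data in place of the signals of model (31); the regressor matrix and the entries of A0 are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

a0_true = np.array([1.5, -0.8])       # hypothetical nonzero entries of A0
Phi = rng.normal(size=(200, 2))       # stand-in regressors built from measured signals
y = Phi @ a0_true + 0.01 * rng.normal(size=200)  # noisy linear observations

# Ordinary least-squares estimate of the A0 entries
a0_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(a0_hat)  # close to [1.5, -0.8]
```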

For the estimation of the kinetic parameters, the main approach is the use of parameter observers (high-gain observers, recursive parameter estimators, sliding mode observers, etc.). All these techniques have numerous tuning parameters and are difficult to implement.

4. Simulation Results and Discussion

The efficacy of our approach is shown by numerical simulations over an interval of 30 hours. The model given by relation (9) was integrated using a fourth-order Runge-Kutta routine with a sampling period of 1 minute, starting from the chosen initial conditions.
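A classical fourth-order Runge-Kutta step, of the kind used to integrate the model, can be written as follows (a sketch; the decay equation below is only a stand-in for the bioprocess right-hand side):

```python
def rk4_step(f, t, x, h):
    """One fourth-order Runge-Kutta step of size h for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Check on dx/dt = -x, whose exact solution is exp(-t)
x, h = 1.0, 1.0 / 60.0          # a 1-minute step, with time in hours
for _ in range(60):             # integrate over one hour
    x = rk4_step(lambda t, y: -y, 0.0, x, h)
print(x)  # close to exp(-1) ≈ 0.367879
```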

The influence of the sampling period, of the type of optimization algorithm, and of noisy measurements is analyzed. To compare the statistical performances of the different approaches, the empirical normalized mean square error (NMSE) was used, defined as

NMSE = (1/p) Σ_{i=1}^{p} ((θ̂_i − θ_i*) / θ_i*)²,

where p is the number of estimated parameters, θ̂_i is the ith element of the estimated parameter vector, and the "*" superscript denotes the true value of the parameter.
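The NMSE criterion can be computed directly (a sketch; the definition below, normalizing each error by the true parameter value, is one common form consistent with the description above):

```python
import numpy as np

def nmse(theta_hat, theta_star):
    """Normalized mean square error between estimated and true parameters."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    return np.mean(((theta_hat - theta_star) / theta_star) ** 2)

print(nmse([1.05, 2.0], [1.0, 2.0]))  # about 0.00125: a 5% error on one of two parameters
```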

In order to study the sensitivity of the estimation method to the sampling period, to the type of PSO algorithm, and to the measurement noise, the following settings were used.

Sampling period: several values were considered, the baseline being 1 minute (see Table 1).

The types of optimization algorithm are as follows. PSO_1: the algorithm based on relation (19), with the velocity defined by the constriction-factor relation (20); PSO_2: the algorithm based on relation (18), with the inertia weight and acceleration coefficients defined by relations (22) and (23); PSO_3: the algorithm based on relation (18) with constant parameters w, c1, and c2. Noise: zero-mean white noise with SNR = 50 dB, 40 dB, and 30 dB.

The results of these simulations are presented in Tables 1, 2, and 3. The simulations were performed with between 50 and 120 particles; all the presented results were obtained with 80 particles, since a greater number of particles did not improve the accuracy of the estimates. Figure 2 presents the convergence rate of the PSO_1 algorithm for a sampling period of 1 min.

Table 1: Influence of the sampling period.
Table 2: Influence of the algorithm type.
Table 3: Influence of the noise level.
Figure 2: Convergence rate for PSO_1 algorithm.

The results presented in Tables 1–3 suggest that the proposed parameter estimation technique yields consistent results. The method is very simple, easy to implement, and needs few tuning parameters. However, research on PSO is still at an early stage, and many problems remain to be solved. Research can be carried out on new particle swarm topologies with better performance; the neighborhood topologies of different particle swarms are based on the imitation of different societies, and selecting the proper topology so that PSO attains its best performance, as well as studying the suitable ranges of application of different topologies, would be meaningful for the use and spread of the algorithm. Hybridizing PSO with other intelligent optimization algorithms means combining the advantages of PSO with those of the other algorithms to create compound algorithms of practical value; for example, particle swarm optimization can be improved by a Simulated Annealing approach. Rapid swarm convergence is one of the main advantages of PSO, but it can also be problematic: if an early solution is suboptimal, the swarm can easily stagnate around it without any pressure to continue exploration. Overall, the results indicate that PSO algorithms can be used for parameter optimization during model identification.

5. Conclusions

The paper presents a particle swarm optimization based identification procedure for offline estimation of yield and kinetic coefficients in an anaerobic wastewater treatment bioprocess. The identification scheme is formulated in terms of an optimization problem where the error between an actual physical measured response of the system and the simulated response of a parameterized model is minimized. This function is multimodal, and classical iterative methods fail to find the global optimum. The estimation of the system parameters is achieved as a result of minimizing the error function by the PSO algorithm.

The simulations evaluated the effects of the sampling period, of some basic variants of the PSO algorithm, and of noisy measurements. The proposed strategy converges to accurate results even in the presence of measurement noise, as illustrated by the numerical study. Compared with classical optimization techniques, the PSO algorithm has a simpler procedure and higher computational efficiency.


Nomenclature

c1, c2: Acceleration coefficients
D: Dilution rate (h−1)
F: Vector of feeding rates
F_in: Input feed rate (L h−1)
K: Matrix of yield coefficients
k_i: Yield coefficients
Q: Vector of rates of removal of the components in gaseous form
S1: Glucose concentration (g/L)
S2: Acetate concentration (g/L)
IC: Inorganic carbon concentration (g/L)
X1: Acidogenic bacteria concentration (g/L)
X2: Acetoclastic methanogenic bacteria concentration (g/L)
CH4: Methane concentration (g/L)
S_in: Glucose concentration in the feed (g/L)
φ: Vector of reaction rates (reaction kinetics)
μ1*, μ2*: Maximal specific growth rates (h−1)
ξ: State vector
R: The set of real numbers
w: Inertia weight
θ: The vector of unknown parameters
θ_i: The unknown parameters.


This work was supported by CNCSIS-UEFISCSU, Project no. PN II-RU TE 106, 9/04.08.2010.


References

1. G. Bastin and D. Dochain, On-Line Estimation and Adaptive Control of Bioreactors, Elsevier, New York, NY, USA, 1990.
2. L. Chen, Modelling, identifiability and control of complex biotechnological systems [Ph.D. thesis], Université Catholique de Louvain, Louvain-la-Neuve, Belgium, 1992.
3. L. M. Li and S. A. Billings, “Continuous time non-linear system identification in the frequency domain,” International Journal of Control, vol. 74, no. 11, pp. 1052–1061, 2001.
4. L. Ljung, System Identification—Theory for the User, Prentice-Hall, Upper Saddle River, NJ, USA, 2nd edition, 1999.
5. S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
6. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
7. A. Colorni, M. Dorigo, and V. Maniezzo, “Distributed optimization by ant colonies,” in Proceedings of the European Conference on Artificial Life (ECAL '91), Elsevier, 1991.
8. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, 1995.
9. J. Kennedy, “Particle swarms: optimization based on sociocognition,” in Recent Developments in Biologically Inspired Computing, The Idea Group, 2004.
10. H. Li, H. Zhong, Z. Yan, and X. Zhang, “Particle swarm optimization algorithm coupled with finite element limit equilibrium method for geotechnical practices,” Mathematical Problems in Engineering, vol. 2012, Article ID 498690, 14 pages, 2012.
11. O. Bernard and G. Bastin, “Identification of reaction networks for bioprocesses: determination of a partially unknown pseudo-stoichiometric matrix,” Bioprocess and Biosystems Engineering, vol. 27, no. 5, pp. 293–301, 2004.
12. D. Selişteanu, E. Petre, and V. B. Răsvan, “Sliding mode and adaptive sliding-mode control of a class of nonlinear bioprocesses,” International Journal of Adaptive Control and Signal Processing, vol. 21, no. 8-9, pp. 795–822, 2007.
13. S. F. Azevedo, P. Ascencio, and D. Sbarbaro, “An adaptive fuzzy hybrid state observer for bioprocesses,” IEEE Transactions on Fuzzy Systems, vol. 12, pp. 641–651, 2004.
14. I. Hassanzadeh and S. Mobayen, “Controller design for rotary inverted pendulum system using evolutionary algorithms,” Mathematical Problems in Engineering, vol. 2011, Article ID 572424, 17 pages, 2011.
15. T.-S. Zhan and C.-C. Kao, “Modified PSO method for robust control of 3RPS parallel manipulators,” Mathematical Problems in Engineering, vol. 2010, Article ID 302430, 25 pages, 2010.
16. P. Umapathy, C. Venkataseshaiah, and M. S. Arumugam, “Particle swarm optimization with various inertia weight variants for optimal power flow solution,” Discrete Dynamics in Nature and Society, vol. 2010, Article ID 462145, 15 pages, 2010.
17. M. Clerc and J. Kennedy, “The particle swarm: explosion, stability, and convergence in a multi-dimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
18. X. Qian, M. Cao, Z. Su, and J. Chen, “A hybrid particle swarm optimization (PSO)-simplex algorithm for damage identification of delaminated beams,” Mathematical Problems in Engineering, vol. 2012, Article ID 607418, 11 pages, 2012.
19. R. C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 84–88, July 2000.
20. Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1945–1950, IEEE Press, 1999.
21. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.