Abstract

The main aim of this work is to show that a powerful optimization tool such as evolutionary algorithms (EAs) can in fact be used for the simulation and optimization of a nonlinear system. A nonlinear mathematical model is required to describe the dynamic behaviour of a batch process, which justifies the use of evolutionary methods to deal with this process. Four algorithms from the field of artificial intelligence are used in this investigation: differential evolution (DE), the self-organizing migrating algorithm (SOMA), the genetic algorithm (GA), and simulated annealing (SA). The results show that EAs can be applied successfully to this process optimization problem.

1. Introduction

Evolutionary computation (EC) techniques, which are based on the powerful evolutionary principle of survival of the fittest, constitute an interesting category of heuristic search. They are stochastic algorithms whose search methods model natural phenomena: genetic inheritance and the Darwinian struggle for survival. The best-known algorithms in this class include genetic algorithms, evolutionary programming, evolution strategies, and genetic programming. There are also many hybrid systems which incorporate various features of the above paradigms and are consequently hard to classify; here they are simply referred to as evolutionary computation methods [1].

In computer science, evolutionary computation is a subfield of artificial intelligence that deals with combinatorial and continuous optimization problems.

Nowadays, optimization is a word used almost every day in many fields of human activity. Everybody wants to maximize profit and minimize cost, which means optimizing tasks in industry, transportation, medicine, and elsewhere. For these purposes, suitable tools are needed that are able to solve very difficult and complicated problems. As recent years have shown, the use of artificial intelligence and soft computing has contributed to improvements in many of these activities. Evolutionary algorithms are one such soft-computing tool [2].

In this paper, the modeling of a dynamic chemical engineering process is presented in an easily understandable way, using a combination of simplified fundamental theory and direct hands-on computer simulation. A nonlinear mathematical model is required to describe the dynamic behaviour of the batch process, which justifies the use of EAs for the static optimization of a chemical batch reactor. The approach is consequently used to design the geometry of the equipment for the chemical reaction. The method was used to optimize the design of the growth chamber and was found to be in good agreement with the observed growth-rate results.

Here, EAs were used to investigate and optimize a batch reactor in order to improve its parameters. EAs are consequently used to model the technical requirements of the chemical reaction. The optimized reactor was simulated, the optimization was carried out by evolutionary algorithms, and the results are presented in graphs.

This work is concerned with the optimization of chemical engineering processes through EAs. The main purposes and goals of the research can be summarized as follows: (i) description and analysis of the chosen dynamic system, more concretely the processes in a batch reactor, (ii) proposing a set of stochastic optimization algorithms whose application enhances confidence in the optimization results, particularly for the chemical reaction, (iii) selecting and demonstrating EAs and a practical method to optimize the chemical process, especially the batch reactor, (iv) demonstrating the use of the chosen algorithms for global optimization of the chemical process and comparing the selected algorithms, and (v) presenting conclusions and suggesting further research perspectives.

2. Description of a Reactor

This work uses a mathematical model of the reactor shown in Figure 1. The vessel has a double wall (jacket) for the cooling medium and is further equipped with a stirrer for mixing the reaction mixture.

The reactor has two physical inputs. The first, denoted "input chemical FK", feeds the reacting substance into the reactor with mass flow rate ṁ_FK, temperature T_FK, and specific heat c_FK. The second, denoted "input cooling medium", feeds water into the reactor jacket with mass flow rate ṁ_V, temperature T_VP, and specific heat c_V. This coolant passes through the jacket surrounding the reaction space, and its total mass in that space is m_VR. The coolant then leaves through the exit denoted "output cooling medium" with mass flow rate ṁ_V, temperature T_V, and specific heat c_V. At the beginning of the process there is an initial batch inside the reactor with mass m_P. The reaction mixture then has total mass m, temperature T, and specific heat c_R and is stirred while the chemical FK, described by its concentration a_FK, reacts.

2.1. Nonlinear Model of Reactor

The description of the reactor uses a system of four balance equations. The first expresses the mass balance of the reaction mixture inside the reactor, the second the mass balance of the chemical FK, and the last two formulate enthalpy balances, namely of the reaction mixture and of the cooling medium. The first balance is given by (1); in the remaining balances (2), the Arrhenius expression is abbreviated by the term k:

\dot{m}_{FK} = m'[t], (1)

\dot{m}_{FK} = m[t]\,a'_{FK}[t] + k\,m[t]\,a_{FK}[t],
\dot{m}_{FK}\,c_{FK}\,T_{FK} + \Delta H_r\,k\,m[t]\,a_{FK}[t] = K\,S\,(T[t] - T_V[t]) + m[t]\,c_R\,T'[t],
\dot{m}_V\,c_V\,T_{VP} + K\,S\,(T[t] - T_V[t]) = \dot{m}_V\,c_V\,T_V[t] + m_{VR}\,c_V\,T'_V[t],
k = A\,e^{-E/(R\,T[t])}. (2)

After modification into the standard form, the balance equations take the form (3):

m'[t] = \dot{m}_{FK},
a'_{FK}[t] = \frac{\dot{m}_{FK}}{m[t]} - A\,e^{-E/(R\,T[t])}\,a_{FK}[t],
T'[t] = \frac{\dot{m}_{FK}\,c_{FK}\,T_{FK}}{m[t]\,c_R} + \frac{A\,e^{-E/(R\,T[t])}\,\Delta H_r\,a_{FK}[t]}{c_R} - \frac{K\,S\,T[t]}{m[t]\,c_R} + \frac{K\,S\,T_V[t]}{m[t]\,c_R},
T'_V[t] = \frac{\dot{m}_V\,T_{VP}}{m_{VR}} + \frac{K\,S\,T[t]}{m_{VR}\,c_V} - \frac{K\,S\,T_V[t]}{m_{VR}\,c_V} - \frac{\dot{m}_V\,T_V[t]}{m_{VR}}. (3)

The design of the reactor was based on standard chemical-technological methods and yields a proposal of the reactor's physical dimensions and of the parameters of the chemical substances. These values are referred to in this contribution as expert parameters. The objective of this part of the work is to perform a simulation and optimization of the given reactor.

The following constants were therefore substituted into the system of equations (3):

A = 219.588 s^{-1}, E = 29967.5087 J·mol^{-1}, R = 8.314 J·mol^{-1}·K^{-1}, c_FK = 4400 J·kg^{-1}·K^{-1}, c_V = 4118 J·kg^{-1}·K^{-1}, c_R = 4500 J·kg^{-1}·K^{-1}, ΔH_r = 1392350 J·kg^{-1}, K = 200 kg·s^{-3}·K^{-1}.

Further parameters important for the calculations are (i) the geometric dimensions of the reaction vessel: radius r [m] and height h [m], (ii) the densities of the chemicals: ρ_P = 1203 kg·m^{-3}, ρ_FK = 1050 kg·m^{-3}, and (iii) the stoichiometric ratio of the chemicals: m_P = 2.82236 m_FK.
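
For illustration, the reconstructed balance equations (3) together with the constants above can be coded directly as the right-hand side of an ODE system. The following Python/NumPy sketch is only an illustrative reading of the model; the variable names, the argument list, and the idea of integrating it with, for example, scipy.integrate.solve_ivp are assumptions of this sketch rather than details given in the paper.

import numpy as np

# Constants from Section 2.1 (units as listed in the text)
A, E, R = 219.588, 29967.5087, 8.314
c_FK, c_V, c_R = 4400.0, 4118.0, 4500.0
dH_r, K = 1392350.0, 200.0

def reactor_rhs(t, y, m_dot_FK, T_FK, m_dot_V, T_VP, S, m_VR):
    """Right-hand side of the balance equations (3); y = [m, a_FK, T, T_V]."""
    m, a_FK, T, T_V = y
    k = A * np.exp(-E / (R * T))                     # Arrhenius term
    dm = m_dot_FK                                    # mass balance of the mixture
    da = m_dot_FK / m - k * a_FK                     # mass balance of chemical FK
    dT = (m_dot_FK * c_FK * T_FK) / (m * c_R) + k * dH_r * a_FK / c_R \
         - K * S * T / (m * c_R) + K * S * T_V / (m * c_R)          # reaction mixture
    dT_V = (m_dot_V * T_VP) / m_VR + K * S * T / (m_VR * c_V) \
           - K * S * T_V / (m_VR * c_V) - m_dot_V * T_V / m_VR      # cooling medium
    return [dm, da, dT, dT_V]

The inputs ṁ_FK, T_FK, ṁ_V, T_VP, S, and m_VR are passed as arguments so that the same function can be reused for different reactor geometries and batching values.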

2.2. Mathematical Problems

The optimization involves mutually linked parameters, which include the heat transfer surface, the volume, and the mass of the reaction mixture. The heat transfer surface S is given by

S = 2\pi r h + \pi r^2, (4)

where r is the radius and h is the height of the reactor space (see Figure 1).

The volume of the reactor vessel is given by

V = \pi r^2 h. (5)

The total mass of the mixture is the sum of the mass of the initial batch m_P and the mass of the added chemical FK, m_FK:

m = m_P + m_{FK}. (6)

The stoichiometric ratio is given by

m_P = 2.82236\, m_{FK}. (7)

The total volume of the mixture equals the sum of the volume of the initial batch and the volume of the added chemical FK:

V = V_P + V_{FK} = \frac{m_P}{\rho_P} + \frac{m_{FK}}{\rho_{FK}}. (8)

Equation (8) relates the optimized reactor volume to the mass of the added chemical FK. Substituting (7) into (8) and solving for m_FK gives

m_{FK} = \frac{\rho_P\,\rho_{FK}\,V}{2.82236\,\rho_{FK} + \rho_P}. (9)

In this example the thickness d of the cooling jacket was also added as an optimized parameter; the mass of the cooling medium in the jacket is then given by

m_{VR} = \rho_V\,S\,d. (10)

The aim is to minimize the area arising as the difference between the required and the real temperature profile of the reaction mixture over a selected time interval, namely the duration of a batch cycle. The required temperature is 97°C (370.15 K). The cost function to be minimized is

f_{cost} = \sum_{t=0}^{t_{max}} |w - T[t]|, (11)

where t_max denotes the duration of the batch cycle.
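
The geometric relations (4)-(10) can likewise be collected into a small helper. This is an illustrative Python sketch; the function name is mine, and the coolant density ρ_V used in (10) is passed as an argument (with water as a hypothetical default) because its numerical value is not listed in the text.

import math

RHO_P, RHO_FK = 1203.0, 1050.0     # densities from Section 2.1 [kg.m^-3]
RATIO = 2.82236                    # stoichiometric ratio: m_P = RATIO * m_FK

def reactor_geometry(r, h, d, rho_V=1000.0):
    """Derived quantities for radius r, height h, and jacket thickness d."""
    S = 2 * math.pi * r * h + math.pi * r**2                 # heat transfer surface (4)
    V = math.pi * r**2 * h                                   # vessel volume (5)
    m_FK = RHO_P * RHO_FK * V / (RATIO * RHO_FK + RHO_P)     # added chemical FK (9)
    m_P = RATIO * m_FK                                       # initial batch (7)
    m_total = m_P + m_FK                                     # total mass of the mixture (6)
    m_VR = rho_V * S * d                                     # coolant mass in the jacket (10)
    return S, V, m_total, m_FK, m_P, m_VR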

3. Methods and Evolutionary Algorithms

3.1. Introduction and Using Evolutionary Algorithms

As the history of the field suggests, there are many different variants of evolutionary algorithms. The common underlying idea behind all these techniques is the same: given a population of individuals, the environmental pressure causes natural selection (survival of the fittest) and this causes a rise in the fitness of the population. Given a quality function to be maximized, we can randomly create a set of candidate solutions, that is, elements of the function’s domain, and apply the quality function as an abstract fitness measure—the higher the better. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and/or mutation to them. Recombination is an operator applied to two or more selected candidates (the so-called parents) and results in one or more new candidates (the children). Mutation is applied to one candidate and results in one new candidate. Executing recombination and mutation leads to a set of new candidates (the offspring) that compete—based on their fitness (and possibly age)—with the old ones for a place in the next generation. This process can be iterated until a candidate with sufficient quality (a solution) is found or a previously set computational limit is reached. In this process there are two fundamental forces that form the basis of evolutionary systems.

Variation operators (recombination and mutation) create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality.

The combined application of variation and selection generally leads to improving fitness values in consecutive populations. It is easy (although somewhat misleading) to see such a process as if the evolution were optimising, or at least "approximising," by approaching optimal values more and more closely over its course. Alternatively, evolution is often seen as a process of adaptation. From this perspective, the fitness is not seen as an objective function to be optimised, but as an expression of environmental requirements. Matching these requirements more closely implies an increased viability, reflected in a higher number of offspring. The evolutionary process makes the population adapt to the environment better and better.

Let us note that many components of such an evolutionary process are stochastic. During selection, fitter individuals have a higher chance of being selected than less fit ones, but typically even the weak individuals have a chance to become a parent or to survive. For recombination of individuals, the choice of which pieces will be recombined is random. Similarly, for mutation, the pieces that will be mutated within a candidate solution, and the new pieces replacing them, are chosen randomly. The general scheme of an evolutionary algorithm is given in Pseudocode 1 [3, 4].

BEGIN
 INITIALISE population with random candidate solutions;
 EVALUATE each candidate;
 REPEAT UNTIL (TERMINATION CONDITION is satisfied) DO
  1. SELECT parents;
  2. RECOMBINE pairs of parents;
  3. MUTATE the resulting offspring;
  4. EVALUATE new candidates;
  5. SELECT individuals for the next generation;
 OD
END
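
The loop in Pseudocode 1 can be written compactly in Python. The following is a minimal illustrative sketch, not the implementation used in this work: tournament selection, uniform crossover, and Gaussian mutation are example choices of operators made here, and the cost function is minimized, as in (11).

import random

def evolve(cost, dim, lo, hi, pop_size=40, generations=200,
           mut_prob=0.1, sigma=0.1):
    # INITIALISE population with random candidate solutions
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [cost(ind) for ind in pop]                       # EVALUATE each candidate
    for _ in range(generations):                           # REPEAT UNTIL limit reached
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = (min(random.sample(range(pop_size), 3), key=lambda i: fit[i])
                      for _ in range(2))                   # 1. SELECT parents (tournament)
            child = [pop[p1][j] if random.random() < 0.5 else pop[p2][j]
                     for j in range(dim)]                  # 2. RECOMBINE (uniform crossover)
            child = [min(hi, max(lo, x + random.gauss(0, sigma)))
                     if random.random() < mut_prob else x
                     for x in child]                       # 3. MUTATE (Gaussian perturbation)
            offspring.append(child)
        off_fit = [cost(ind) for ind in offspring]         # 4. EVALUATE new candidates
        merged = sorted(zip(pop + offspring, fit + off_fit), key=lambda p: p[1])
        pop = [p[0] for p in merged[:pop_size]]            # 5. SELECT next generation
        fit = [p[1] for p in merged[:pop_size]]
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

For instance, evolve(lambda x: sum(v * v for v in x), dim=5, lo=-5.0, hi=5.0) would minimize a simple sphere function with this generic scheme.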

Evolutionary algorithms are a group of algorithms which use special operators such as mutation, crossover, and others to find an ideal solution. Possible candidates are evaluated by a cost function whose arguments are the values of each solution. The best one lies in the global extreme, either a maximum or a minimum [2].

Evolutionary algorithms have been known for decades and have advanced from weaker early variants to more robust ones which are now used successfully in many tasks. Since their first appearance, a long list of representatives has emerged: genetic algorithms, differential evolution, the self-organizing migrating algorithm, particle swarm optimization, ant colony optimization, and artificial immune systems. Optimization also includes other stochastic and deterministic algorithms: hill climbing, simulated annealing, Monte Carlo methods, and many others or their variants. These techniques promise fast optimization compared with the classical mathematical approach. On the other hand, even among these optimization techniques it is possible to find better and worse performers. Their behaviour has been described in many references, yet the research in this area still contains many blank spots. There is a wide field of possible applications, such as tuning of parameters, making comparisons, and devising new algorithms [5].

Optimization algorithms, mainly evolutionary algorithms, are a necessary part of the above-described tools and can also be used independently. Here, an overview is given only of the algorithms used in the subsequent simulations.

One possible division of optimization algorithms is shown in Figure 2; this is not the only point of view [2].

3.2. A Brief Survey of Scoping and Screening Chemical Reaction Networks Using Stochastic Optimization

Many methods have been developed for the design of the so-called optimal chemical reactor. The research article of Marcoulaki and Kokossis [6] introduced a brief survey of scoping and screening chemical reaction networks using stochastic optimization. The new methods focus on a systematic and thorough consideration of the available options and employ technology in the form of superstructures, optimization techniques, and a variety of graphical methods.

The importance of mathematical methods in reactor optimization was exemplified early on with the application of dynamic programming for the estimation of optimal operating conditions in CSTR cascades [7] and the development of graphical techniques for single reversible reactions in PFRs (1961).

Around the same time, a set of brilliant contributions by Horn [8] provided the basis of material that later emerged as attainable-region (AR) approaches. Dyson and Horn [9] developed graphical tools for optimal temperature control schemes, feed distribution profiles along a PFR, and catalyst minimization problems [10]. In these early days, separate groups made attempts to consolidate options and alternatives within comprehensive reactor structures [11–13]. Optimization approaches initially addressed fixed reactor structures. Examples include the work of Paynter and Haskins [14] and Chitra and Govind [15–17]. The first studies of comprehensive structures should be attributed to Achenie and Biegler [18–20], who employed existing representations [11, 12] to launch optimization techniques in the form of NLP methods.

Kokossis and Floudas [21–23] first introduced the idea of a reactor network superstructure modeled and optimized as an MINLP formulation. Though general and inclusive, their representation did not follow previous developments, but made an effort to facilitate the functionalities of the MINLP technology with the synthesis objectives. Mainly to scope, optimize, and analyze the reaction process, Kokossis and Floudas replaced detailed models with simple though generic structures, enough to screen for design options and estimate the limiting performance of the reaction system. In the same vein, dynamic components have been replaced by CSTR cascades. A superstructure of generic elements (ideal CSTRs and PFRs) was postulated to account for all possible interconnections among the units. The representation was modeled and optimized as an MINLP model.

Though fundamental limitations appear evident, persistent efforts to extend the graphical methods have appeared in the literature [24–30].

A more promising direction has been pursued by Biegler and coworkers. The motivation has been to instill better guarantees in the optimization efforts by exploiting ideas and rules established in the construction of the AR. Applications presented in this area include the work by Balakrishna and Biegler [31, 32] and Lakshmanan and Biegler [33–35] and involved mathematical programming applications in the form of NLP and MINLP formulations. Optimal control formulations have been presented by Rojnuckarin et al. [36] and Schweiger and Floudas [37]. Hildebrandt and Biegler [38] presented a review of the attainable-region approaches and suggested areas for future development of the concept.

Especially in recent years, the methods of artificial intelligence, namely evolutionary algorithms, have been used to successfully optimize chemical processes. Evolutionary algorithms have been applied to the solution of NLP in many engineering applications. The best-known algorithms in this class include genetic algorithms (GA), evolutionary programming (EP), evolution strategies (ES), and genetic programming (GP). There are many hybrid systems which incorporate various features of the above paradigms and are consequently hard to classify; they can be referred to simply as EC methods (Dasgupta and Michalewicz [39]). They differ from the conventional algorithms since, in general, only information regarding the objective function is required.

In recent years, EC methods have been applied to a broad range of activities in process systems engineering, including modeling, optimization, and control. See, for example, real-time control of a plasma reactor [40–42], optimization of reactive distillation processes using the self-organizing migrating algorithm and differential evolution strategies [43], using methods of artificial intelligence to optimize and control a chemical reactor [44], investigation of the optimization of process parameters and chemical reactor geometry by evolutionary algorithms [44], or an optimum solution for a process control problem (continuous stirred tank reactor) using a hybrid neural network [45].

4. Selected Evolutionary Algorithms

For the experiments described here, stochastic optimization algorithms were selected: differential evolution (DE) [46], the self-organizing migrating algorithm (SOMA) [47, 48], the genetic algorithm (GA) [49], and simulated annealing (SA) [50, 51]. The main reason why DE, SOMA, GA, and SA were chosen follows from the contemporary state of chemical engineering and of EA use. Some research has already addressed the use of EAs in chemical engineering optimization, including DE. This contribution aims to show that relatively new algorithms are also applicable and can lead to usable results, as was shown, for example, in Zelinka [2], work carried out under the 5th EU framework project RESTORM (an acronym of Radically Environmentally Sustainable Tannery Operation by Resource Management), whose main aim was to use EAs in chemical engineering processes. It is also true that there are plenty of other heuristics, such as particle swarm optimization [52], scatter search [53], memetic algorithms, and simulated annealing [50], and according to the no-free-lunch theorem [54] it is clear that each heuristic would be more or less applicable to the example presented here. SOMA is a stochastic optimization algorithm that is modelled on the social behaviour of cooperating individuals [47, 48]. It was chosen because the algorithm has been proved to be able to converge towards the global optimum [47, 48]. GA is one of the best-known paradigms for general problem solving. Genetic algorithms are more robust than existing directed search methods. Another important property of GA-based search methods is that they maintain a population of potential solutions, whereas other methods, such as hill climbing, process a single point of the search space. Hill climbing methods provide local optima, and these values depend on the selection of the starting point; moreover, no information is available on the relative error with respect to the global optimum. To increase the success rate of hill climbing, it is executed for a large number of randomly selected starting points. GA, on the other hand, performs a multidirectional search, maintaining a population of potential solutions and encouraging information formation and exchange between these directions. Furthermore, SA is a generic probabilistic metaheuristic for global optimization, namely for locating a good approximation to the global optimum of a given function in a large search space. SA has been used in various combinatorial optimization problems and has been particularly successful in circuit design problems (see [50]).

4.1. Differential Evolution (DE)

Differential evolution [46] is a population-based optimization method that works on real-number-coded individuals. For each individual x_{i,G} in the current generation G, DE generates a new trial individual x'_{i,G} by adding the weighted difference between two randomly selected individuals x_{r1,G} and x_{r2,G} to a third randomly selected individual x_{r3,G}. The resulting individual x'_{i,G} is crossed over with the original individual x_{i,G}. The fitness of the resulting individual, referred to as the perturbed vector u_{i,G+1}, is then compared with the fitness of x_{i,G}. If the fitness of u_{i,G+1} is better than the fitness of x_{i,G}, then x_{i,G} is replaced with u_{i,G+1}; otherwise, x_{i,G} remains in the population as x_{i,G+1}. Differential evolution is robust, fast, and effective, with global optimization ability. It does not require the objective function to be differentiable, and it works with noisy, epistatic, and time-dependent objective functions (see Pseudocode 2).

1. Input: D, G_max, NP ≥ 4, F ∈ (0, 1+), CR ∈ [0, 1], and initial bounds x^(lo), x^(hi).
2. Initialize: ∀i ≤ NP ∧ ∀j ≤ D: x_{i,j,G=0} = x_j^(lo) + rand_j[0,1] · (x_j^(hi) − x_j^(lo)),
   i = {1, 2, ..., NP}, j = {1, 2, ..., D}, G = 0, rand_j[0,1] ∈ [0, 1]
3. While G < G_max
   ∀i ≤ NP
   4. Mutate and recombine:
    4.1 r1, r2, r3 ∈ {1, 2, ..., NP}, randomly selected, except r1 ≠ r2 ≠ r3 ≠ i
    4.2 j_rand ∈ {1, 2, ..., D}, randomly selected once for each i
    4.3 ∀j ≤ D: u_{j,i,G+1} = x_{j,r3,G} + F · (x_{j,r1,G} − x_{j,r2,G}) if (rand_j[0,1] < CR ∨ j = j_rand),
        u_{j,i,G+1} = x_{j,i,G} otherwise
   5. Select: x_{i,G+1} = u_{i,G+1} if f(u_{i,G+1}) ≤ f(x_{i,G}), x_{i,G} otherwise
   G = G + 1

There are several versions of differential evolution; two standard versions, namely DERand1Bin and DERand2Bin, were chosen for the optimization of the chemical reactor.
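
As an illustration of the DERand1Bin strategy described above (Pseudocode 2), the following Python sketch implements DE/rand/1/bin for minimization. The values of NP, F, and CR are placeholders, not the settings of Table 2.

import random

def de_rand_1_bin(cost, dim, lo, hi, NP=30, F=0.8, CR=0.9, g_max=200):
    # Initialize the population inside the given bounds
    pop = [[random.uniform(lo[j], hi[j]) for j in range(dim)] for _ in range(NP)]
    fit = [cost(x) for x in pop]
    for _ in range(g_max):
        for i in range(NP):
            r1, r2, r3 = random.sample([k for k in range(NP) if k != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:           # binomial crossover
                    v = pop[r3][j] + F * (pop[r1][j] - pop[r2][j])  # rand/1 mutation
                    v = min(hi[j], max(lo[j], v))                 # keep inside bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            f_trial = cost(trial)
            if f_trial <= fit[i]:                                 # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]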

4.2. Self-Organizing Migrating Algorithm (SOMA)

SOMA is a stochastic optimization algorithm that is modelled on the social behaviour of cooperating individuals [47, 48]. It was chosen because the algorithm has been proved to be able to converge towards the global optimum [47, 48]. SOMA works on a population of candidate solutions in loops called migration loops. At the beginning of the search, the population is initialized randomly, distributed over the search space. In each loop the population is evaluated, and the solution with the highest fitness becomes the leader L. Apart from the leader, in one migration loop all individuals traverse the input space in the direction of the leader. Mutation, the random perturbation of individuals, is an important operation for evolution strategies (ES). It ensures diversity amongst the individuals, and it also provides a means to restore lost information in a population. Mutation is different in SOMA compared with other ES strategies. SOMA uses a parameter called PRT to achieve perturbation. This parameter has the same effect for SOMA as mutation has for GA. The PRT vector defines the final movement of an active individual in the search space.

The randomly generated binary perturbation vector controls the allowed dimensions for an individual. If an element of the perturbation vector is set to zero, then the individual is not allowed to change its position in the corresponding dimension. An individual travels a certain distance (called the path length) towards the leader in n steps of defined length. If the path length is chosen to be greater than one, then the individual will overshoot the leader. This path is perturbed randomly. For an exact description of the algorithm, see [47, 48]. The pseudocode of SOMA is shown in Pseudocode 3.

Input: N, Migrations, PopSize ≥ 2, PRT ∈ [0, 1], Step ∈ (0, 1], MinDiv ∈ (0, 1],
  PathLength ∈ (0, 5], specimen with upper and lower bounds x_j^(hi), x_j^(lo)
Initialization: ∀i ≤ PopSize ∧ ∀j ≤ N: x_{i,j,Migrations=0} = x_j^(lo) + rand_j[0,1] · (x_j^(hi) − x_j^(lo)),
  i = {1, 2, ..., PopSize}, j = {1, 2, ..., N}, Migrations = 0, rand_j[0,1] ∈ [0, 1]
While Migrations < Migrations_max
  ∀i ≤ PopSize
   While t ≤ PathLength
    if rnd_j < PRT then PRTVector_j = 1, else 0, j = 1, ..., N
    x_{i,j}^{ML+1} = x_{i,j,start}^{ML} + (x_{L,j}^{ML} − x_{i,j,start}^{ML}) · t · PRTVector_j
    f(x_{i,j}^{ML+1}) = f(x_{i,j}^{ML+1}) if f(x_{i,j}^{ML+1}) ≤ f(x_{i,j,start}^{ML}), else f(x_{i,j,start}^{ML})
    t = t + Step
  Migrations = Migrations + 1

Nowadays, there are several versions of the SOMA algorithm. In this work I have used two strategies of SOMA for the optimization and predictive control of the chemical reactor: "All to One" (SOMAATO), that is, the worst-performing version, and "All to One Random" (SOMAATOR), that is, the best-performing version of SOMA. (i) All to One: this strategy was described in the previous section; "all to one" means that all individuals in the population migrate towards the leader (except the leader itself). (ii) All to One Random: a strategy in which all individuals migrate towards one individual (the leader) that is not the one with the best position on the hyperplane but is, in each migration loop, selected randomly from the population. A possible modification of this strategy is that the individuals are not selected randomly but according to fitness, as is the case in genetic algorithms.
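
The "All to One" strategy can be sketched in Python as follows. This is a compact illustrative reading of Pseudocode 3 in which the PRT vector is regenerated at every step; the values of PRT, Step, PathLength, and the population size are placeholders, not the settings of Table 1.

import random

def soma_all_to_one(cost, dim, lo, hi, pop_size=20, migrations=50,
                    prt=0.1, step=0.11, path_length=3.0):
    pop = [[random.uniform(lo[j], hi[j]) for j in range(dim)] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(migrations):
        leader = min(range(pop_size), key=lambda i: fit[i])     # best individual leads
        for i in range(pop_size):
            if i == leader:
                continue
            best_x, best_f = pop[i][:], fit[i]
            t = step
            while t <= path_length:                             # jump towards the leader
                prt_vec = [1 if random.random() < prt else 0 for _ in range(dim)]
                cand = [min(hi[j], max(lo[j],
                        pop[i][j] + (pop[leader][j] - pop[i][j]) * t * prt_vec[j]))
                        for j in range(dim)]
                f_cand = cost(cand)
                if f_cand < best_f:                             # remember the best point on the path
                    best_x, best_f = cand, f_cand
                t += step
            pop[i], fit[i] = best_x, best_f                     # individual ends at its best position
    leader = min(range(pop_size), key=lambda i: fit[i])
    return pop[leader], fit[leader]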

4.3. Genetic Algorithm (GA)

Genetic algorithms (GA) imitate the evolutionary processes with emphasis on genotype-based operators (genotype/phenotype dualism). The GA works on a population of artificial chromosomes, referred to as individuals. Each individual is represented by a string of 𝐿 bits. Each segment of this string corresponds to a variable of the optimizing problem in a binary encoded form.

The population is evolved in the optimization process mainly by crossover operations. This operation recombines the bit strings of individuals in the population with a certain probability Pc. Mutation plays a secondary role in most applications of a GA. It ensures that some bits are changed, thus allowing the GA to explore the complete search space even if necessary alleles are temporarily lost due to convergence.

Pseudocode 4 describes the general principle of a genetic algorithm.

t = 0;
initialize(P(t = 0));
evaluate(P(t = 0));
while isNotTerminated() do
 P_c(t) = reproduction(P(t));
 mutate(P_c(t));
 evaluate(P_c(t));
 P(t + 1) = buildNextGenerationFrom(P_c(t), P(t));
 t = t + 1;
end
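
A binary-encoded GA of the kind described above can be sketched in Python as follows. The decoding of a bit segment to a real value, tournament selection, one-point crossover with probability Pc, and bit-flip mutation with probability Pm are illustrative choices; the values of BITS, Pc, and Pm are placeholders, not the GA settings of Table 3.

import random

BITS = 16   # bits per encoded variable (illustrative)

def decode(bits, lo, hi):
    """Map one BITS-long segment of the chromosome to a real value in [lo, hi]."""
    value = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * value / (2**BITS - 1)

def ga_step(pop, cost, Pc=0.8, Pm=0.01):
    """One generation: tournament selection, one-point crossover with
    probability Pc, bit-flip mutation with probability Pm (lower cost = better)."""
    new_pop = []
    n = len(pop)
    while len(new_pop) < n:
        p1, p2 = (min(random.sample(pop, 2), key=cost) for _ in range(2))
        c1, c2 = p1[:], p2[:]
        if random.random() < Pc:                      # one-point crossover
            cut = random.randrange(1, len(p1))
            c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
        for c in (c1, c2):                            # bit-flip mutation
            for k in range(len(c)):
                if random.random() < Pm:
                    c[k] ^= 1
            new_pop.append(c)
    return new_pop[:n]

Individuals are simply lists of 0/1 integers of length BITS times the number of optimized variables, created randomly at initialization.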

4.4. Simulated Annealing (SA)

Simulated annealing (SA) is based on the similarity between the annealing of solids and the solving of combinatorial optimization problems [50]. SA proceeds through a sequence of decreasing temperatures, with a number of iterations at each temperature. First, the initial temperature is selected and an initial solution is randomly chosen. The value of the cost function for the current solution (i.e., the initial solution in this case) is then calculated; the goal is to minimize the cost function. Afterwards, a new solution is generated from the neighbourhood of the current solution. The new value of the cost function for this solution is calculated and compared with the current cost function value. If the new cost function value is less than the current value, it is accepted. Otherwise, the new value is accepted only when the Metropolis criterion [55], which is based on the Boltzmann probability, is met. According to the Metropolis criterion, if the difference between the cost function values of the newly generated and the current solutions (ΔE) is equal to or larger than zero, a random number δ in [0, 1] is generated from a uniform distribution, and the newly generated solution is accepted as the current solution if (12) is met:

\delta \le e^{-\Delta E / T}. (12)

The number of new solutions generated at each temperature equals the number of iterations at that temperature, which is constrained by the termination condition; the termination condition can be as simple as a fixed number of iterations. After all iterations at a temperature are completed, the temperature is lowered according to the temperature-updating rule. At the updated (and lowered) temperature, all required iterations have to be completed before moving to the next temperature. This process repeats until the halting criterion is met; the halting criterion can be, for example, reaching a preset minimum temperature. The result of simulated annealing depends on the number of iterations at each temperature and on the speed of reducing the temperature. The temperature-updating rule used in this paper is

\mathrm{Temperature} = T\, e^{-rt}, (13)

where T is the initial temperature, r the cooling ratio, and t the number of times the temperature has been lowered. The cooling ratio controls the speed of cooling: the higher the cooling ratio, the faster the temperature decreases.

In this work I have chosen two versions of the SA algorithm for the investigation of the optimization of the chemical reactor: SA with elitism (SA_Elitism) and SA without elitism (SA_NoElitism).
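
The SA scheme above, with the Metropolis criterion (12) and the cooling rule (13), can be sketched in Python as follows. The "elitism" flag here simply keeps and returns the best solution visited so far, which is one simplified reading of the SA_Elitism variant; the neighbourhood size and all parameter values are placeholders, not the settings of Table 4.

import math
import random

def simulated_annealing(cost, x0, lo, hi, T0=100.0, r=0.05, T_min=1e-3,
                        iters_per_temp=50, neigh=0.1, elitism=True):
    x, fx = list(x0), cost(x0)
    best_x, best_f = x[:], fx
    t = 0                                               # number of temperature reductions
    temp = T0
    while temp > T_min:                                 # halting criterion: minimum temperature
        for _ in range(iters_per_temp):
            cand = [min(hi[j], max(lo[j], x[j] + random.uniform(-neigh, neigh)))
                    for j in range(len(x))]             # neighbour of the current solution
            f_cand = cost(cand)
            dE = f_cand - fx
            if dE < 0 or random.random() <= math.exp(-dE / temp):   # Metropolis criterion (12)
                x, fx = cand, f_cand
            if fx < best_f:
                best_x, best_f = x[:], fx               # "elitism": remember the best solution so far
        t += 1
        temp = T0 * math.exp(-r * t)                    # cooling rule (13)
    return (best_x, best_f) if elitism else (x, fx)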

4.5. Usage of Elitism

Elitism uses synchronization at the end of each temperature phase; otherwise, communication proceeds asynchronously after each iteration. (i) Disadvantage: this approach requires excessive communication, which increases the computation time. (ii) Advantage: elitism removes the problem of accepting worse solutions during the low-temperature phase.

5. Static Optimization of Reactor

The reactor described above gives unsatisfactory results in its original setup. To improve the reactor behaviour, static optimization was performed using the algorithms SOMA, DE, GA, and SA. The optimization covered the batching value and the geometric parameters of the reactor.

5.1. The Cost Function (CF)

In this optimization the aim was to minimize the area arising as the difference between the required and the real temperature profile of the reaction mixture over a selected time interval, namely the duration of a batch cycle. The required temperature was 97°C (370.15 K). The cost function that was minimized is given by

f_{cost} = \sum_{t=0}^{t_{max}} |w - T[t]|, (14)

where w is the required temperature (the set point) and T is the temperature of the reaction mixture.

In general, the CF is calculated from the distance between the desired state and the actual system output.
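
Putting the pieces together, the cost function (14) can be evaluated by integrating the reactor model over one batch cycle and summing the absolute deviation of T from the required temperature w = 370.15 K. The Python sketch below reuses the reactor_rhs and reactor_geometry sketches from Sections 2.1 and 2.2; the parameter vector, the inlet temperatures, the initial conditions, and the batch duration are hypothetical placeholders, since the actual optimized parameters and their ranges are those of Table 5.

import numpy as np
from scipy.integrate import solve_ivp

W = 370.15            # required temperature w [K]
T_END = 25000.0       # batch cycle duration [s] -- placeholder, not from the paper
TIMES = np.linspace(0.0, T_END, 1000)

def cost_function(params):
    # params = (r, h, d, m_dot_FK, m_dot_V): one possible set of optimized
    # parameters; the actual set and its ranges are those of Table 5.
    r, h, d, m_dot_FK, m_dot_V = params
    S, V, m_total, m_FK, m_P, m_VR = reactor_geometry(r, h, d)   # sketch from Section 2.2
    y0 = [m_P, 0.0, 293.15, 293.15]        # initial batch only, ambient temperatures (assumed)
    sol = solve_ivp(reactor_rhs, (0.0, T_END), y0, t_eval=TIMES,
                    args=(m_dot_FK, 293.15, m_dot_V, 288.15, S, m_VR))  # assumed inlet temperatures
    T_profile = sol.y[2]                   # temperature of the reaction mixture
    return float(np.sum(np.abs(W - T_profile)))   # cost function (14)

Any of the algorithm sketches above (for example de_rand_1_bin or soma_all_to_one) can then be used to minimize cost_function over the chosen parameter ranges.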

5.2. Parameter Settings

The parameter settings were found empirically and are given in Table 1 (SOMA) and Table 2 (DE); the settings for GA and SA are given in Tables 3 and 4. The main criterion was to keep the parameter settings as similar as possible across the algorithms and, of course, to use the same number of cost function evaluations and the same population size (parameter PopSize for SOMA and GA, NP for DE). The number of optimized reactor parameters and their ranges are given in Table 5.

6. Experimental Results

Because EAs are partly stochastic in nature, a large set of simulations has to be carried out in order to obtain data for statistical processing. The four algorithms (SOMA, DE, GA, and SA) were each applied 100 times in order to find the optimum of the process parameters and the reactor geometry. All important data have been visualized directly and/or processed into graphs demonstrating the performance of the four algorithms. The estimated parameters and their diversity (minimum, maximum, and average) are depicted in Figures 3 and 4, from which it is visible that the results of the four algorithms are comparable. For demonstration, the best solutions are shown graphically in subfigures (b), (d), (f), and (h) of Figures 5–11, which show the time dependence of the process parameters for the four algorithms. The best parameter values are recorded in Tables 6 and 7. The diversity (minimum, maximum, and average) over all one hundred simulations is described in Tables 8 to 14 for each version of the four algorithms. Figures 5–11 show, for example, the records of all 100 simulations and the best solutions of all 100 simulations (Figures 5 and 6 for SOMA, Figures 6 and 7 for DE, Figure 8 for GA, and Figures 9 and 10 for SA).

6.1. Parameter Diversity for Repeated 100 Times Simulations

See Tables 8–14.

6.2. Graphics Results

See Figures 3–11.

7. Discussion to the Result Optimization

This work has presented a systematic procedure to derive a solution model for the operation of a dynamic chemical reactor process. The results produced by the optimizations depend not only on the problem being solved but also on the way the cost function is defined. All simulations were repeated 100 times for each EA with the same initial conditions for each simulation.

The differences between the four methods SOMA, DE, GA, and SA are best seen in Tables 5, 6, and 7. The first part shows the parameters of the batch reactor designed by an expert, and the second part shows the parameters obtained through static optimization.

The calculation was repeated 100 times, and the best, worst, and average individuals were recorded from the last population in each simulation. All one hundred triplets (best, worst, average) were used to create Figures 3 and 4.

The four algorithms (SOMA, DE, GA, and SA) were applied 100 times in order to find the optimum of the process parameters and the reactor geometry. The primary aim of this comparative study is not to show which algorithm is better or worse, but to show from the outputs of all simulations depicted in Figures 5–11 that SOMA produces the best solutions in the actual simulations, better than DE, GA, and SA. Based on the data from all simulations, four comparisons can be made. From the point of view of parameter variation, the estimated parameters depicted in Figures 3 and 4 show that the four algorithms are comparable in performance (with small deviations).

From the graphs it is evident that the trajectories of the SOMA algorithm are densely clustered in a narrow band not far from the start of the mass axis (see Figure 5(a)); only a few values drift out of this band. The results of the DE algorithm are spread more widely over the mass range. From these results we may conclude that SOMA has much better convergence than the DE, GA, and SA algorithms (see Figures 5–11). For a better overview of the comparison between the four algorithms, I have chosen the temperature of the reaction mixture T as the process variable, shown in Figure 12.

In Figure 12 we can see that the temperature T simulated with the parameters found by SOMA is more stable than with the other algorithms (concretely, for this experimental batch reactor problem).

From the obtained results, it is possible to say that all simulations give satisfactory results and, thus, that evolutionary algorithms are capable of solving this class of difficult problems. The quality of the results does not depend only on the problem being solved; it is also extremely sensitive to the proper definition of the cost function and to the selection of the parameter settings of the evolutionary algorithms.

8. General Conclusion and Further Research Perspective

In this paper, evolutionary algorithms were used for the static optimization of a chemical reactor in order to improve the quality of its behaviour in the uncontrolled state. The optimization tools were described, and four EAs (SOMA, DE, SA, GA) were selected, particularly for their proven robustness and their ability to successfully solve complex optimization problems such as that of the chemical reactor.

In fact, the methods of artificial intelligence, mainly evolutionary computation techniques, can be used for such a difficult task as the analysis and optimization of a nonlinear system, in particular the given chemical reactor. The main aim of the paper was focused on examples of the implementation of EAs in methods for the chemical reaction, for the purpose of obtaining better results, meaning a faster approach to the desired stable state and superior stabilization, and on showing that EAs can be robust and effective in optimizing difficult problems in the field of chemical engineering.

The basic optimization process presented here was based on a relatively simple cost function. Unless the experiment is limited by technical issues when searching for optimal parameters, there is no problem in defining a more complex functional including, as subcriteria, for example, stability, cost, time-optimal criteria, controllability, or their arbitrary combinations.

Future research on the evolutionary algorithms SOMA, DE, GA, and SA remains open. Based on all the results obtained during this research, it is planned that the main activities will focus on extending this study to other chemical dynamic systems.

From the results of this paper, we can conclude that EAs have shown great potential and ability to solve complex optimization problems, not only in the field of chemical engineering processes but also in diverse industrial fields.

Acknowledgments

This work is part of the science activities at Ton Duc Thang University, Ho Chi Minh City, Vietnam. The author would like to thank his professor I. Zelinka from the Department of Informatics and Artificial Intelligence, Tomas Bata University in Zlín, Czech Republic, who inspired his writing and supported him in a very professional way in his research.