Mathematical Problems in Engineering
Volume 2013, Article ID 560614, 13 pages
http://dx.doi.org/10.1155/2013/560614
Research Article

CFSO3: A New Supervised Swarm-Based Optimization Algorithm

Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome, Italy

Received 3 May 2013; Accepted 7 July 2013

Academic Editor: Orwa Jaber Housheya

Copyright © 2013 Antonino Laudani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present CFSO3, an optimization heuristic within the class of swarm intelligence, based on a synergy among three different features of the Continuous Flock-of-Starlings Optimization. One of the main novelties is that this optimizer is no longer a classical numerical algorithm, since it can now be seen as a continuous dynamic system, which can be treated by using all the mathematical instruments available for managing state equations. In addition, CFSO3 allows passing from stochastic approaches to supervised deterministic ones, since the random updating of parameters, a typical feature of numerical swarm-based optimization algorithms, is fully replaced by a supervised strategy: in CFSO3 the tuning of the parameters is designed a priori for obtaining both exploration and exploitation. Indeed, the exploration, that is, the escape from a local minimum, as well as the convergence and the refinement of a solution, can be designed simply by managing the eigenvalues of the CFSO state equations. Virtually, in CFSO3 just the initial values of the positions and velocities of the swarm members have to be randomly assigned. Both standard and parallel versions of CFSO3, together with validations on classical benchmarks, are presented.

1. Introduction

Exploration and exploitation are the two fundamental requirements for algorithms devoted to inverse problems and/or optimization. In many applications more than one global optimum may exist, and/or the space of solutions to be investigated may be very high dimensional. In all these cases, the risk of remaining entrapped in a local minimum is quite high, especially if the algorithm in use is not able to explore large spaces [1]. On the other hand, in other cases it is necessary to converge to a solution with a high level of accuracy. These conditions are difficult to satisfy if the algorithm is not able to perform good exploitation [2]. Thus, apparently, exploration and exploitation seem to be antithetic requirements. With the aim of matching these two properties in a single optimization strategy, in this paper we present CFSO3, which takes its main inspiration from swarm algorithms and in particular from their translation into continuous dynamic systems.

From a historical point of view, swarm optimization was introduced by Kennedy and Eberhart [3] in the 1990s with the Particle Swarm Optimization (PSO), a heuristic inspired by the social and collective behaviour shown by several animal species, such as flocks of birds or schools of fish [4, 5]. Virtually, it consists of a one-to-one correspondence between the motion of flocks searching for food and the iterative steps of algorithms searching for the best solution of an optimization problem. Many authors have published a large number of works related to PSO, and the vastness of its applications in different fields of science, such as engineering, physics, chemistry, artificial intelligence, and economics, testifies to its success among many scientific communities (e.g., see [6–9] and the references therein). A large series of changes to the original PSO have been proposed in order to improve its performance. In particular, many works have focused on the way to manage the tuning of parameters for achieving better convergence to the global optimum and/or for improving exploration for multimodal problems (e.g., see [10–19] and the references therein), and the effects of topological rules on performance have been discussed [20–26]. The basic idea of topological rules is to link each member of the swarm with others, generating a more complex information exchange among particles than the simple use of the global best. A particular way to link the particle behaviours is achieved by using a kind of neighbour-velocity matching, that is, the idea of the Flock-of-Starlings Optimization (FSO) [27, 28], inspired by a naturalistic work presented in [29]. The FSO uses topology for exchanging the information about the current velocity of each connected particle/bird. Recently, many authors have proposed several attempts to treat the swarm numerical algorithms as continuous dynamical systems. For example, important contributions are in [30–32].
The main goal of these works is to investigate the stability of PSO and the effects on the optimization performance produced by different settings of the parameters. In this scenario, the authors of the present paper have translated the numerical FSO equations into the state equations of a continuous-time kinetic dynamic system, proposing the so-called Continuous Flock-of-Starlings Optimization (CFSO) [33–35]. This new optimizer, which is the starting point of the present paper, turns out to be quite effective thanks to the use of mathematical closed forms describing the swarm member trajectories as functions of time, since it opens the road to all those mathematical instruments used for studying state equations. In addition, the CFSO has several features that make it very interesting and promising, such as, among others, the setting of parameters without using randomness. Thanks to the stability analysis of the CFSO state equations, the tuning of the parameters becomes the way to control the stability of the trajectories within the space of solutions, that is, to control exploration and exploitation, since the parameter values fix the eigenvalues of the dynamic system (e.g., see [35–38] and the references therein).

In this paper, results derived from the investigation of this distinguishing behaviour of the CFSO are discussed, in order to verify the effectiveness of the innovative nonstochastic approach characterizing this continuous swarm optimizer. In particular, the present paper shows a hybridization of three different features of the CFSO, which allows us to enhance both exploration and convergence. Indeed, the change from emphasizing exploration to boosting convergence/refinement is achievable simply by means of a suitable setting of the real part of the pole values (or eigenvalues) related to the CFSO state equations: in particular, pure imaginary poles perform the exploration of the whole space of solutions; unstable poles allow the swarm to escape from local minima; whereas the refinement of the solution is obtained by launching an asymptotically stable CFSO. Only the initial position and velocity of each individual (bird) are randomly assigned.

Validations on classical benchmarks, showing the advantages of the proposed continuous approach, are presented, and the CFSO implementation has been made available for download at the link indicated in [39].

2. Recall of the Continuous Flock-of-Starlings Optimization: From Numerical Algorithms to Dynamic System State Equations

In [33, 35] it has been proved that it is possible to convert the numerical swarm optimization algorithms, PSO and FSO, into continuous time-domain dynamical kinetic systems described by a set of state equations. Virtually, the state equations equivalent to the rules used by the swarm algorithms for updating the velocity and position of the generic i-th particle are given in (1), where d = 1, ..., D indexes the dimensions of the solution space, and w, c1, and c2 are the so-called inertial, cognitive, and social coefficients, respectively; g is the d-th component of the global best of the whole swarm, whereas p_i is the d-th component of the personal best of the i-th particle; h_ij (equal to h if the i-th bird is controlling the j-th one, and zero otherwise) is the term that transforms the PSO into the FSO. This term, which strongly modifies the collective behaviour of the flock as shown in [27, 28], comes from the observation [29] that, in a real flock, each generic bird controls and follows the flight of a number of other members of the flock, no matter what their positions inside the flock are. Finally, the personal best and global best have been assumed to be the excitations of the dynamic system by writing the quantity in (3). By posing (4), it is possible to rewrite (1) in the form (2). Equations (2) and (4) describe the state equations of a continuous dynamic system. Thus, if we consider a flock made of N birds, the state-equation system of the CFSO has dimension 2N for each component, and it can be expressed, for the component related to the d-th dimension, as in (5) (from now on, in order to simplify the notation, the apex d will be dropped; consequently, all expressions must be implicitly assumed to be valid for each generic d-th component). The submatrix appearing in (5) is the N x N identity matrix, and the N x N square submatrix is defined in (6). The matrix H appearing there takes into account the neighbour-velocity-matching rule and for this reason is called the neighbour-velocity-matching matrix. The nonzero values of H are the nondiagonal entries h_ij = h, for which the i-th bird is controlling the velocity of the j-th one.
It is evident that, just by removing the matrix H from (6), it is possible to convert the CFSO into a continuous PSO (CPSO). Finally, the last submatrix appearing in (5) is given in (7), whereas the excitation vector has, as its i-th row entry, the quantity defined in (3).
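Although the matrices of (5)-(7) did not survive in the text above, their block structure is clear from the prose: a velocity block with the inertial coefficient on the diagonal and the topological coefficient off-diagonal, an attraction block acting on the positions, and an identity block expressing dx/dt = v. The sketch below assembles a plausible 2N x 2N state matrix for a 1-D, fully connected flock; the symbol names (w, c1, c2, h) and the exact coefficient placement are assumptions, not a verbatim copy of the paper's formulas:

```python
import numpy as np

def cfso_matrix(n, w, c1, c2, h):
    """Assemble a 2n x 2n state matrix for a 1-D flock of n birds.
    State layout: s = [v_1..v_n, x_1..x_n].
    Assumed dynamics (sketch, not the paper's exact matrices):
      dv_i/dt = w*v_i + h*sum_{j != i} v_j - (c1 + c2)*x_i  (+ excitation)
      dx_i/dt = v_i
    Setting h = 0 decouples the velocity block: the CPSO case of Sec. 2."""
    A = np.zeros((2 * n, 2 * n))
    # velocity block: inertial term on the diagonal, neighbour-velocity
    # matching (fully connected) on the off-diagonals
    A[:n, :n] = h * (np.ones((n, n)) - np.eye(n)) + w * np.eye(n)
    # attraction toward the (constant) bests enters through the positions
    A[:n, n:] = -(c1 + c2) * np.eye(n)
    # position integrates velocity
    A[n:, :n] = np.eye(n)
    return A
```

With h nonzero the off-diagonal entries couple the free responses of the birds, which is exactly the property discussed in Section 2.4.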

2.1. Solution of the CFSO State Equations

Let us assume that the space where the solution lies is bounded; then the global and personal best positions will also be bounded, and consequently we can apply the Laplace transform, in the variable s, to the system (5), as in (8). In (8), the terms accounting for the initial conditions are column vectors (from now on, for briefness, we will indicate them as V0 and X0); they take into account the initial conditions related to velocities and positions, respectively, while each single excitation entry is given in (9). Finally, by solving (8) in the Laplace domain, we obtain (12). Now, let us decompose the response (solution) for the i-th bird as the superposition of the free response and the forced response, as in (13) and (14). In this way, we can collect the global and all personal bests inside the forced solution (14), whereas the initial conditions appear just in the free solution (13). This is a crucial fact, since it means that, for studying the collective behaviour of the swarm, we have to analyze just the free response (13), which is not influenced by the habitat (i.e., the global and personal bests evaluated by the fitness function) but only by the initial conditions of each swarm member. Clearly, after having written the solution of the state equations in the Laplace domain, we have to translate the solution into the time domain. But the Laplace antitransformation is not always possible, since the inverse of the system matrix cannot, in general, be evaluated analytically. Indeed, if we write the neighbour-velocity-matching matrix H without a prefixed order, the only way to solve the CFSO equations in the time domain is to adopt a numerical ordinary differential equation solver, for example, a Runge-Kutta algorithm (the ODE suite of MATLAB).
On the other hand, it has been proved in [35] that a particular ordered choice of the matrix describing the neighbour-velocity matching makes it possible to integrate the CFSO equations analytically, by means of the inverse Laplace transform, so that closed-form expressions in the time domain are easily available. The fundamental advantage of using closed forms is that they remarkably reduce the computational cost, making the setup of the optimization problem easier. In addition, they can be used directly to update position and velocity, as will be shown in the next sections.

Let us show here how such closed forms can be achieved. For simplicity, but without loss of generality, let us consider the so-called fully connected CFSO, which occurs when each i-th bird controls all the other N - 1 members of the flock; this means that all the nondiagonal entries of the neighbour-velocity-matching matrix H are set to a value equal to h, as in (15). Thus, the matrices appearing in the solution (12) of the system become, respectively, those in (16) and (17). The values of the entries of the inverse of the symmetric matrix (16) are given in (18). Thus, by compacting them into one single term (19), the solution can finally be expressed by (20).

2.2. Time Windowing

Now, in order to solve the system of state equations (20), a further consideration must be made, since the forcing term is not a priori known: its elements depend on the trajectories followed by the birds. Consequently, they must be evaluated dynamically according to the fitness values monitored during the motion of the swarm/flock. Moreover, it is worth noticing that we are interested in building an optimizer, and, consequently, it is our aim to evaluate the fitness function exactly at the positions reached by each member of the flock. In order to address this goal, we subdivide the time axis into a sequence of time windows, each one having duration equal to a value τ. In this way, we can assume, for each k-th time window TW_k, that the excitation (3) is a constant value equal to the one evaluated at the end of the previous (k - 1)-th TW. It follows that, for a generic TW_k that begins at time t_k and ends at time t_k + τ, we assume that the entries of the forcing terms (3) are equal to a constant value for any time within that k-th window. It is worth noticing that, as τ tends to zero, the CFSO system would become truly continuous, but the fitness function would have to be evaluated infinitely many times. Anyway, for finite values of τ, the system (5) is still time invariant within each time window. As a consequence, it is still possible to use Laplace transforms in the previously described way. Now, the only difference is that we will obtain analytical closed forms, for the position and the velocity of each bird of the flock, valid just for a single TW. The final solution over the desired time of observation will be obtained by the union of the solutions returned TW-by-TW, with the continuity being guaranteed by the initial conditions, which are always evaluated at the beginning of each new TW.
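The TW-by-TW scheme above can be sketched as an exact step of a linear system with piecewise-constant excitation: within each window the drive b is frozen, and the window update has the standard closed form x(t0 + τ) = e^{Aτ} x(t0) + A^{-1}(e^{Aτ} - I) b. The helper below assumes a diagonalizable, invertible state matrix; all names are illustrative:

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (adequate for the
    diagonalizable matrices considered here)."""
    lam, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

def step_window(A, b, s0, tau):
    """Advance ds/dt = A s + b over one time window of width tau with b
    held constant (the piecewise-constant excitation of Section 2.2):
    s(t0 + tau) = e^{A tau} s(t0) + A^{-1} (e^{A tau} - I) b."""
    E = expm(A * tau)
    return E @ s0 + np.linalg.solve(A, (E - np.eye(len(s0))) @ b)
```

Chaining calls, with the state at the end of one window used as the initial condition of the next, reproduces the TW-by-TW continuity described above.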

2.3. Closed Forms for the Continuous-Flock-of-Starlings Optimization

On the basis of the previous assumptions, the Laplace transform related to (3), valid for the i-th particle/bird (and clearly for the generic d-th component) within the k-th TW, is given in (21). By elaborating and reordering (20), we obtain the explicit expression (22) for the Laplace transform of each i-th member of the flock. Now it is possible to evaluate the inverse Laplace transform. Firstly, the nonzero poles of (22) are given in (23). If we assume that the parameters satisfy the conditions (24), all poles have multiplicity equal to 1. Under this assumption, by applying to (22) the classical residue method for the evaluation of inverse Laplace transforms, the time-domain response (25) is finally obtained. As has already been said, (25) is valid for a generic i-th particle/bird just within a single time window of width τ starting from t_k. It is the closed form which describes how the particle/bird changes its d-th coordinate in time; that is, (25) describes the portion of the trajectory along the d-th dimension followed by the i-th member of the flock within that time window. The corresponding closed form of the velocity along the d-th dimension, (26), is trivially obtained by differentiation with respect to time. These closed forms must be used as the updating rules for the position and velocity of each bird at each step/time window, whose amplitude τ may vary from one time window to another. In addition, the knowledge of the analytical expressions of the poles allows us to calculate the conditions (27) on the CFSO parameters which provide full asymptotic stability. A MATLAB implementation (version 1.0) of the previously presented closed forms can be downloaded from the link in [39].
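Since the explicit pole formulas of (23) were lost in extraction, a numerical stand-in helps fix ideas: each particle/dimension behaves as a second-order system whose poles are the roots of a quadratic characteristic polynomial. The damped-oscillator coefficients a and k below are assumptions, not the paper's parameters; the stability check mirrors condition (27):

```python
import numpy as np

def particle_poles(a, k):
    """Poles of an assumed per-particle model
        dv/dt = a*v + k*(q - x),  dx/dt = v,
    i.e. the roots of s^2 - a*s + k = 0. For 4k > a^2 the poles are
    complex conjugate with real part a/2, so:
      a = 0 -> pure imaginary poles, a < 0 -> asymptotically stable,
      a > 0 -> unstable (the three regimes of Section 2.6)."""
    return np.roots([1.0, -a, k])

def asymptotically_stable(poles):
    """Analogue of condition (27): every pole strictly in the left
    half of the complex plane."""
    return bool(np.all(poles.real < 0))
```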

2.4. Continuous PSO versus CFSO: Effect of the Neighbour-Velocity Matching on the Collective Behaviour

The stability conditions give us the further possibility to analyze the effect of topological rules on the collective behaviour of the swarm/flock members. Firstly, it is worth noticing that we can obtain the closed-form expressions for a continuous PSO just by posing h = 0 in (25) and (26), obtaining (after some substitutions) the expressions in (28). As can be observed, for the continuous PSO, the way in which the velocity and the position of a generic i-th particle (28) change during time takes into account terms deriving from the free response and other terms deriving from the forced response. Thus, if we consider just the free response of the system, we lose any contribution coming from the other particles (global and personal bests), since the free response of the continuous PSO depends only on the behaviour of the particle itself. On the contrary, this is not true for the CFSO, since, in this case, the free response of a generic bird still depends on the behaviour of the other members of the flock, and the collective behaviour exists even if we cancel the contribution of the global and personal bests. These observations suggest that the collective behaviour is always present for the CFSO (and consequently for the FSO), whereas the CPSO and PSO are rather based on a forced social behaviour due to the action of the "global" best.

2.5. CFSO Algorithm

The CFSO algorithm can be summarized by the following pseudo-code, valid for a generic fitness function f to be minimized in a search space of dimension D, with bounds on each of the D components.

Initialization

Set:
(i) N, the total number of elements in the flock;
(ii) the number of birds in the flock controlled by one single bird;
(iii) the inertial coefficient w, the cognitive coefficient c1, the social coefficient c2, and the topological coefficient h (i.e., the choice of the typology of the poles);
(iv) the maximum number of TWs;
(v) the (random) initial velocity of each i-th bird;
(vi) the (random) initial position of each i-th bird;
(vii) the initial personal fitness of each bird;
(viii) the initial global fitness;
(ix) the fitness threshold goal_fitness = arbitrarily small.

Main Loop

For each i-th particle and for each TW of amplitude τ, until the assigned maximum number of TWs is reached:
(i) update, for each i-th particle, the position and velocity at the end of the considered TW by using (25) and (26) computed for t = τ (let us indicate with x_i and v_i, respectively, the position and the velocity at the end of the considered TW);
(ii) evaluate the fitness f(x_i);
(iii) if f(x_i) is better than the personal best fitness of the i-th particle, then assign the current position as the personal best position and update the personal best fitness;
(iv) if f(x_i) is better than the global best fitness, then assign the current position as the global best position and update the global best fitness;
(v) assign the new initial conditions (positions and velocities) and the new excitations (3) for the next TW.

End for

It is worth noticing again that, for each TW, the excitations are constant values, and that from one TW to the next the continuity of the trajectories is guaranteed by means of the initial conditions, in terms of positions and velocities, evaluated as the last values of the previous TW.
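The pseudo-code above can be sketched as a minimal, runnable optimizer. The sketch below uses a decoupled (continuous-PSO-like) per-particle model with an exact linear window update, rather than the paper's coupled closed forms (25)-(26), which did not survive extraction; the damped-oscillator coefficients a and k, and the equal weighting of personal and global bests in the excitation, are assumptions. As in the paper, only the initial positions and velocities are random:

```python
import numpy as np

def cfso_sketch(f, dim=2, n=10, windows=300, tau=0.5,
                a=-0.3, k=1.0, c1=1.0, c2=1.0, seed=0):
    """Minimal continuous-swarm optimizer in the spirit of Sec. 2.5.
    Per dimension, each particle obeys dv/dt = a*v + k*(q - x),
    dx/dt = v, with q the (c1, c2)-weighted mix of personal and
    global bests, held constant over each TW of width tau."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))      # random initial positions
    v = rng.uniform(-1.0, 1.0, (n, dim))      # random initial velocities
    p = x.copy()
    pf = np.array([f(xi) for xi in x])        # personal best fitnesses
    g = p[pf.argmin()].copy()                 # global best position
    gf = float(pf.min())
    A = np.array([[a, -k], [1.0, 0.0]])       # per-dimension state [v, x]
    lam, V = np.linalg.eig(A)
    E = (V @ np.diag(np.exp(lam * tau)) @ np.linalg.inv(V)).real
    G = np.linalg.inv(A) @ (E - np.eye(2))    # response to a constant drive
    for _ in range(windows):
        for i in range(n):
            q = (c1 * p[i] + c2 * g) / (c1 + c2)   # frozen excitation on this TW
            for d in range(dim):
                s = E @ np.array([v[i, d], x[i, d]]) + G @ np.array([k * q[d], 0.0])
                v[i, d], x[i, d] = s[0], s[1]
            fi = f(x[i])
            if fi < pf[i]:                    # personal best update
                pf[i], p[i] = fi, x[i].copy()
            if fi < gf:                       # global best update
                gf, g = fi, x[i].copy()
    return g, gf
```

For a fixed seed the run is fully deterministic, mirroring the repeatability property stressed in Section 3.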

2.6. Assignments of Poles for Different Flock Behaviours

The individuals of a swarm can be driven through three different types of trajectories by three different assignments of the poles: couples of conjugate pure imaginary poles (stable system); poles with negative real part (asymptotically stable system); and poles with positive real part (unstable system). With the aim of investigating the free response of the CFSO more deeply, let us assume the global and all the personal bests to be equal to constant values for the whole duration of the simulation. This last assumption allows us to investigate the free response, since no variation of the global or personal bests is possible. For the sake of simplicity, we refer to a bidimensional space, since in this way we have the possibility to plot the trajectories followed by the birds.

2.6.1. Couple of Conjugate Pure Imaginary Poles

In this case, owing to the previous assumption, the forced terms are not allowed to change from one TW to another, so we can look at the trajectory after the end of the transient phase. Since we have imposed a couple of conjugate pure imaginary poles for at least one member of the swarm, the corresponding components of both position and velocity show a periodic behaviour. Consequently, each bird having this kind of poles will describe (in the 2D case) a closed loop, as Figure 1 shows. In terms of optimum searching, this kind of trajectory can be interpreted as a path made by a member which explores the surrounding space in order to search for a possible new better solution.

Figure 1: Typical trajectories of birds in the flock for pure imaginary poles. It is possible to see the 30 birds starting around the origin, making 30 closed loops.
2.6.2. Poles with Negative Real Part (Asymptotically Stable System)

Clearly, the previously described trajectories are not useful from the viewpoint of accuracy in detecting optimum solutions. Thus, in order to refine the values of candidate solutions which may lie in a subspace investigated by members with conjugate pure imaginary poles, those poles are converted by setting their real part to a negative value. Indeed, according to (25) and (26), when the transient ends, the positions are distributed around the global best and all the velocities are null. Moreover, during the transient phase, the birds follow convergent trajectories towards the global best (see Figure 2), and consequently a refinement of the solution is obtained.

Figure 2: Typical trajectories of birds in the flock for asymptotically stable poles. The convergence to the region containing the global best, which coincides with the origin of the axes, is evident. The effect of refinement can be seen by comparing the coordinate ranges with those of Figure 1.
2.6.3. Poles with Positive Real Part (Unstable System)

The last case presented here concerns poles with positive real part. As previously stated for pure imaginary poles, in some situations the birds performing the CFSO may have fallen into a region containing a local minimum. It is important to have a strategy for escaping from that situation and restarting the exploration. This can easily be done by choosing poles with a positive real part. Indeed, in this case the trajectories are forced to diverge even in the presence of a local minimum (see Figure 3). In addition, if the poles also have a nonzero imaginary part, the oscillatory behaviour is combined with the exponential one, and the birds escape from the entrapping region by following a sort of spiral (unstable focus). Clearly, this kind of poles could lead the birds to exit from the search domain; thus they must be used only for a few TWs (possibly just one TW if we choose at the same time a high value for the TW amplitude, τ, as will be discussed in the next section).

Figure 3: Typical trajectories of birds in the flock for unstable poles. The trajectories of 10 birds, starting also in this case around the origin, diverge after a few steps, and consequently they escape from the minimum.
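The three regimes of Sections 2.6.1-2.6.3 can be reproduced numerically with the same assumed second-order per-particle model used earlier (a stand-in for the paper's closed forms, with the pole real part controlled by the single coefficient a):

```python
import numpy as np

def free_response(a, k, s0, t):
    """Free response at time t of the per-particle state s = [v, x] with
    dv/dt = a*v - k*x, dx/dt = v. The poles are s = (a +/- sqrt(a^2 - 4k))/2:
      a = 0 -> pure imaginary poles, closed orbits (Sec. 2.6.1);
      a < 0 -> asymptotically stable, trajectories converge (Sec. 2.6.2);
      a > 0 -> unstable, trajectories diverge/escape (Sec. 2.6.3)."""
    A = np.array([[a, -k], [1.0, 0.0]])
    lam, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real @ np.array(s0)
```

With k = 1 and a = 0, the state returns exactly to its initial value after one period 2π (a closed loop, as in Figure 1); with a < 0 the state norm decays (Figure 2); with a > 0 it grows (Figure 3).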
2.7. Role of the Parameter τ

As previously stated, an important feature of the CFSO is the possibility of predicting the behaviour of the particles/birds simply by studying the poles of the Laplace transforms for the different TWs. Indeed, by changing these poles we can force the trajectories of the swarm members to converge towards a point, to escape from a possible local minimum, or to stay in a limited zone (closed loops) of the solution space. In this way, we can improve the capabilities of exploration and exploitation simply by changing the poles of the state-equation system at each time window. Thus, by a suitable choice of the poles from one TW to another, it is possible to control the whole behaviour of the swarm members. On the contrary, the purely numerical (noncontinuous) algorithms employ the random updating of parameters to avoid local minima. By using the closed forms of the CFSO, the randomness of the parameters can be eliminated, because the user knows a priori how to manage the parameters to obtain convergence or divergence, that is, how to favour exploitation rather than exploration or vice versa. In addition, at any time during the optimization procedure, the CFSO can refine the found solution through a suitable decrease of the TW amplitudes, as will be discussed in the Validation section. Indeed, as shown in Figure 4, it is always possible to refine the solution simply by using small TW amplitudes together with negative-real-part poles: small TWs return small displacements (consequently, a large number of personal bests is investigated), and a refinement of the values of the candidate solutions is obtained. In the same way, large TW amplitudes together with positive-real-part poles are useful to escape quickly from local minima.

Figure 4: Example of managing of time window amplitudes for refining candidate solution.
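The role of τ described above can be checked numerically on the same assumed per-particle model: with stable poles, a small window amplitude yields a small per-window displacement (fine sampling of candidate solutions), while a large amplitude yields a large jump:

```python
import numpy as np

def window_displacement(a, k, tau, s0=(1.0, 0.0)):
    """Displacement of the position coordinate over one time window of
    width tau under the free dynamics dv/dt = a*v - k*x, dx/dt = v
    (state s = [v, x]). Illustrates Sec. 2.7: small tau -> small step
    (refinement), large tau -> large jump (fast escape)."""
    A = np.array([[a, -k], [1.0, 0.0]])
    lam, V = np.linalg.eig(A)
    E = (V @ np.diag(np.exp(lam * tau)) @ np.linalg.inv(V)).real
    s1 = E @ np.array(s0)
    return abs(s1[1] - s0[1])   # change in the position coordinate
```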

3. From CFSO to CFSO3

On the basis of the previous remarks on the CFSO, a hybridization strategy similar to the one successfully followed in [40] has been developed in order to enhance both exploration and convergence. In [40], three different numerical heuristics were used in such a way as to exploit their different abilities in exploration, exploitation, or refinement. In the present case, the same idea has been applied to take advantage of the different behaviours provided by the CFSO depending on the different settings of its parameters (i.e., the nature of the poles). In particular, at the beginning of the whole optimization process, the CFSO is implemented by setting pure imaginary poles (CFSOpi). The CFSOpi performs exploration, and whenever it finds a subregion in which there is a high probability of discovering a global minimum (briefly called a "suspected region," according to [40]), two operations are made in cascade: the exploitation and the refinement of the solution, obtained by launching an asymptotically stable CFSO (CFSOas), that is, a CFSO in which the real part of the poles is set to be negative; in this way we can intensify the exploitation of the "suspected region" for finding the value of the minimum that lies there (the CFSOas population is initialized by means of the result that the CFSOpi has just found); the CFSO poles are then temporarily changed into unstable poles (CFSOus) for a few steps (after which the pure-imaginary-pole CFSO is restored), in order to escape from the minimum inside the exploited suspected region (since it could be a local minimum). This hybridization has been called CFSO3 in order to underline the three different aspects of the CFSO which are used in synergy. In other words, the optimization process is fully supervised; that is, the change of the nature of the poles is a priori planned by a criterion based on monitoring the value of the cost function. In the setup of the procedure, no stochastic parameter has been used, with the exception of the starting positions and velocities of the birds. This means that if one launches CFSO3 twice starting from the same initial conditions, one obtains identical results.
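The a priori (supervised) pole plan just described can be sketched as a simple deterministic schedule; the mapping of each regime to a pole real part and the step count per phase are illustrative choices, not values taken from the paper:

```python
def cfso3_schedule(cycles, steps=100):
    """Deterministic pole plan of CFSO3 (Section 3): per cycle, explore
    with pure-imaginary poles ('pi'), refine with asymptotically stable
    poles ('as'), then escape with unstable poles ('us').
    Returns a list of (regime, pole_real_part, steps) triples."""
    real_part = {"pi": 0.0, "as": -0.5, "us": 0.5}   # illustrative values
    return [(regime, real_part[regime], steps)
            for _ in range(cycles)
            for regime in ("pi", "as", "us")]
```

Because the plan is fixed in advance, two runs from the same initial conditions traverse exactly the same regimes, which is the repeatability property stated above.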

3.1. Parallel Architecture

CFSO3 provides its best performance on a distributed architecture, and the algorithm can be fully designed for a master-slave configuration for parallel computation [41, 42]. According to the different CFSO peculiarities related to the different settings of the poles, CFSO3 uses imaginary poles and unstable poles just on the master node, whereas the refinement phase with asymptotically stable poles is performed on the slave ones. Virtually, the CFSOpi performs the exploration of the solution space on the master node, and whenever it finds a "suspected region," two simultaneous operations are made: (1) the CFSOus is launched for a prescribed number of steps (i.e., TWs) on the master node (thereafter the imaginary poles are reset and we return to the CFSOpi), in such a way as to escape from the "suspected region" and restart the exploration; (2) a CFSOas is launched on the first available slave node of the cluster with the aim of refining the solution in that "suspected region," the CFSOas population being initialized on the basis of the best result found by the CFSOpi at the current iteration.

In more detail, the CFSOas is left to run for a fixed number of iterations, with a TW duration (the parameter τ) around 10 times smaller than the one used by the CFSOpi on the master node. While the processes on the slave nodes are running, the CFSOpi is provisionally substituted by the CFSOus for a few (generally not more than a couple of) TWs; thereafter the CFSOpi is relaunched. The CFSOpi then continues to explore the solution space on the master node, and whenever it finds a further "suspected region," another CFSOas is launched on a different slave node, and so on. After having delivered the found solution to the master, the slave node is ready to be utilized for refining a further new "suspected region." All final results coming from the slave nodes are stored in a dedicated file located in a shared folder on the master node. Finally, when all processes have been concluded (i.e., all slave PCs have completed the refinement procedures previously described, and the CFSOpi on the master node has ended all the programmed iterations), a list of all results coming from the slave nodes is available within the master node storage file. Then, the best minimum (global optimum) is trivially identified from that list. The flow chart of the parallel architecture is shown in Figure 5.

Figure 5: Flow chart of parallel version of CFSO3.
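The master-slave pattern of Section 3.1 can be sketched with a thread pool standing in for the cluster: the master ships each "suspected region" to a free worker and finally picks the best refined result. The local refinement routine below is a simple deterministic coordinate-descent polish, not the paper's CFSOas; all names are illustrative:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def local_refine(f, x0, steps=200, step=0.05):
    """Stand-in for a slave-node CFSOas run: a deterministic polish
    around the master's candidate x0 (the paper runs an asymptotically
    stable CFSO here instead)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        for d in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[d] += delta
                if f(trial) < f(x):
                    x = trial
    return x, f(x)

def master_slave(f, suspected, workers=4):
    """Skeleton of Sec. 3.1: each 'suspected region' found by the
    exploring master is dispatched to a free worker for refinement;
    the best refined result is returned as the global optimum."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda s: local_refine(f, s), suspected))
    return min(results, key=lambda r: r[1])
```

In the paper's architecture the workers are slave PCs and the results are collected in a shared storage file; here the thread pool and the returned list play those roles.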

4. Validation

The CFSO algorithm has been successfully validated for the solution of engineering design problems [33, 34]. Moreover, its performance has also been compared with that of a generic implementation of PSO for the solution of classical optimization benchmarks. It is worth noticing that all validations have been made by starting from the same initial conditions (positions and velocities) [35], with the aim of obtaining the same starting situations for both the CFSO and the standard PSO.

Hereafter we validate CFSO3 by considering the same tests presented in [35] and further tests on typical benchmarks [43, 44]. In particular, we have used the 2D versions of the seven benchmarks proposed in [44]: F1 Shifted Sphere Function, F2 Schwefel's Problem 2.21, F3 Shifted Rosenbrock's Function, F4 Shifted Rastrigin's Function, F5 Shifted Griewank's Function, F6 Shifted Ackley's Function, and the FastFractal "DoubleDip" function from the fractal-function benchmarking suite [44]. The choice of the shifted versions of the benchmarks and of the DoubleDip function has the aim of mitigating the effect of possible initialization biases. In particular, we compare the results obtained by the CFSO in its fully connected standard implementation (which was already successfully compared with PSO in [35]) with those obtained by the present CFSO3, implemented by following the flow chart described in Figure 6, that is, without the use of the parallel architecture. In practice, we repeat for a number of cycles a sequence in which a CFSOpi, two CFSOas, and a CFSOus are sequentially executed, each one for 100 steps and by using the parameters reported in Table 1. It is important to note that the deterministic nature of these algorithms causes the identical repetition of the trajectories if the same set of guess values is used.

Table 1: Parameters used in CFSO3.
Figure 6: Flow chart of the CFSO3 implementation used for tests.

In order to adopt the same initial conditions for all cases, we have used the same positions and velocities as in [35] (100 tests for each benchmark). In particular, we have used 1000 steps for the CFSO (the number of steps was the stop criterion adopted), which corresponds to a number of function evaluations (FEs) equal to 10000, whereas for CFSO3 the number of steps and of FEs depends on the number of cycles executed. The performances for the benchmarks (F1–F6) are presented in Table 2 in terms of success rate, that is, the percentage of tests for which the error achieved is less than a prefixed threshold (chosen as 10^−6 for F1, F3, and F5 and 10^−4 for F2, F4, and F6).

Table 2: Success rate (%) of CFSO and CFSO3 for different number of cycles.

From Table 2 it is possible to see that the success rate increases with the number of cycles executed (success rates between about 70% and 97% already after 3 cycles for these examples). Clearly, when the CFSOus phase is not executed, we perform the refinement but give up the ability to escape from possible local minima; nevertheless, even in this case an improvement of the performance can be appreciated. These results confirm that, thanks to the refinement carried out by CFSOas and to the capability of escaping from local minima given by CFSOus, we are able to find the solution of an optimization problem almost independently of the imposed initial conditions, that is, without using any stochastic parameters but rather through a supervised optimization approach. In addition, it is worth noticing that the maximum error achieved after 3 cycles is of a few percent for all the benchmarks.

For the DoubleDip function, since we do not have any information about the value of the global minimum, we present the results by means of a set of histograms (shown in Figure 7) reporting the percentage of tests that return each fitness value. As can be seen in Figure 7, the larger the number of iterations, the larger the percentage of tests returning a minimum value around 29.56 (the best value obtained among all tests is 29.56615, against the 29.5641 obtained in [35]).
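The histograms of Figure 7 can be reproduced by binning the fitness values of the test runs and reporting the percentage per bin; a minimal sketch with hypothetical fitness values (the bin width below is an assumption, not the one used in the paper):

```python
from collections import Counter

def fitness_histogram(fitness_values, bin_width=0.01):
    """Percentage of tests whose fitness falls in each bin of width bin_width."""
    n = len(fitness_values)
    counts = Counter(round(f / bin_width) * bin_width for f in fitness_values)
    return {round(b, 10): 100.0 * c / n for b, c in sorted(counts.items())}

# Hypothetical fitness values from four test runs of the DoubleDip benchmark
print(fitness_histogram([29.566, 29.567, 29.566, 29.601]))
```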

Figure 7: Percentage of tests returning the corresponding fitness values for the DoubleDip function.

Clearly, as previously stated, these results are just early validations, and further, deeper investigations must be carried out in order to evaluate the effectiveness of such a modification and to make the setup of the continuous optimizer more efficient. For example, the nature of the benchmarks F2, F4, and F6, which present a higher gradient, suggests increasing the number of steps devoted to refinement, or adopting smaller values of the corresponding parameter (in order not to change the code used for the benchmark, the threshold was kept at 10−4 in this case). Nevertheless, the promising results obtained in validation confirm the soundness of the proposed supervised deterministic approach. In addition, with the aim of involving the scientific community in the use and validation of the developed CFSO code, we have provided its MATLAB version 1.0 in [39].

5. Conclusions

In this paper a supervised approach based on the hybridization of different CFSOs, a heuristic that converts numerical swarm-based algorithms into analytical closed forms, has been presented. The use of the closed forms makes it possible to investigate the different types of trajectories that the swarm members can follow under different parameter settings, that is, the influence of the parameter values on the divergence or convergence (exploration or exploitation) of swarm-based algorithms. On the other hand, the development of CFSO3 has allowed eliminating the random updating of parameters, a common praxis in swarm-based algorithms: CFSO3 manages the parameters by switching from one behaviour to another. All these items allow us to design a supervised optimizer which combines exploration, escaping from minima, and refinement abilities (resp., CFSOpi, CFSOus, and CFSOas). CFSO3 can be supported by both sequential and parallel implementations. Although further investigations are necessary for a deeper analysis of this optimizer, the promising results returned by the validations confirm the effectiveness of the proposed approach (a CFSO MATLAB code is available in [39]). In addition, the CFSO/CFSO3 equations, defining a continuous dynamic system, can also be implemented in hardware by means of an analog circuit, opening the possibility of putting into practice a new typology of optimizers [34], suitable for the solution of real-time problems, also in combination with hybrid artificial intelligence systems that use neural networks [45, 46].

Appendix

PSO as Approximation of CPSO Equations

It is interesting to note that the numerical updating rules of PSO can be obtained from (28) and (29) under some appropriate approximations. Let us start with (28) and rewrite it for a single time step:

By substituting the first-order approximation for the exponential, we obtain the PSO updating rule for the positions.

In an analogous way it is possible to retrieve the PSO rule (29) for updating the velocity:

From this last equation, it is clear that, in order to obtain the correct numerical version of PSO, an equivalence relation must be established between the inertial parameter of the PSO algorithm and the inertial parameter of the CPSO:
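Since the closed forms (28) and (29) are specific to the CPSO and are not reproduced above, the approximation chain can be sketched in standard PSO notation; the symbols below ($\lambda$, $\Delta t$, $\omega$, $c_1 r_1$, $c_2 r_2$, $p_i$, $g$) are the conventional ones and are assumptions, not necessarily the symbols of (28) and (29):

```latex
% First-order truncation of the exponential appearing in the closed forms:
e^{\lambda \Delta t} \approx 1 + \lambda \Delta t .
% Under this truncation (and \Delta t = 1) the continuous trajectories reduce
% to the standard discrete PSO updates:
v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \bigl[ p_i - x_i(t) \bigr]
         + c_2 r_2 \bigl[ g - x_i(t) \bigr], \qquad
x_i(t+1) = x_i(t) + v_i(t+1) .
```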

References

  1. F. Riganti Fulginei and A. Salvini, “Comparative analysis between modern heuristics and hybrid algorithms,” The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 26, no. 2, pp. 259–268, 2007.
  2. C. W. Liew and M. Labiri, “Exploration or convergence? Another metacontrol mechanism for GAs,” in Proceedings of the 18th International Florida AI Research Society Conference, pp. 251–256, May 2005.
  3. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, December 1995.
  4. C. W. Reynolds, “Flocks, herds and schools: a distributed behavioral model,” Computer Graphics, vol. 21, no. 4, pp. 25–34, 1987.
  5. F. Heppner and U. Grenander, “A stochastic nonlinear model for coordinated bird flocks,” in The Ubiquity of Chaos, S. Krasner, Ed., AAAS Publications, Washington, DC, USA, 1990.
  6. A. Engelbrecht, Fundamentals of Computational Swarm Intelligence, Wiley, New York, NY, USA, 2005.
  7. M. Clerc, Particle Swarm Optimization, ISTE, London, UK, 2006.
  8. M. R. AlRashidi and M. E. El-Hawary, “A survey of particle swarm optimization applications in electric power systems,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 4, pp. 913–918, 2009.
  9. S. Coco, A. Laudani, G. Pollicino, G. Pulcini, F. Riganti Fulginei, and A. Salvini, “TWT magnetic focusing structure optimization by parallel evolutionary algorithm,” The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 31, no. 5, pp. 1338–1346, 2012.
  10. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, May 1998.
  11. J. H. Seo, C. H. Im, S. Y. Kwak, C. G. Lee, and H. K. Jung, “An improved particle swarm optimization algorithm mimicking territorial dispute between groups for multimodal function optimization problems,” IEEE Transactions on Magnetics, vol. 44, no. 6, pp. 1046–1049, 2008.
  12. K. Parsopoulos and M. Vrahatis, “Modification of the particle swarm optimizer for locating all the global minima,” in Artificial Neural Networks and Genetic Algorithms, pp. 324–327, Springer, 2001.
  13. D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,” in Proceedings of IEEE Swarm Intelligence Symposium (SIS '07), pp. 120–127, IEEE Press, April 2007.
  14. X. Chen and Y. Li, “A modified PSO structure resulting in high exploration ability with convergence guaranteed,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 5, pp. 1271–1289, 2007.
  15. Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 39, no. 6, pp. 1362–1381, 2009.
  16. M. M. Ali and P. Kaelo, “Improved particle swarm algorithms for global optimization,” Applied Mathematics and Computation, vol. 196, no. 2, pp. 578–593, 2008.
  17. M. A. Montes de Oca, T. Stützle, M. Birattari, and M. Dorigo, “Frankenstein's PSO: a composite particle swarm optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1120–1132, 2009.
  18. Z. Xinchao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing, vol. 10, no. 1, pp. 119–124, 2010.
  19. L. Liu, S. Yang, and D. Wang, “Particle swarm optimization with composite particles in dynamic environments,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 40, no. 6, pp. 1634–1648, 2010.
  20. R. Mendes, Population Topologies and Their Influence in Particle Swarm Performance [Ph.D. thesis], Universidade do Minho, Braga, Portugal, 2004.
  21. R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.
  22. J. Kennedy and R. Mendes, “Neighborhood topologies in fully informed and best-of-neighborhood particle swarms,” IEEE Transactions on Systems, Man, and Cybernetics C, vol. 36, no. 4, pp. 515–519, 2006.
  23. A. S. Mohais, R. Mendes, C. Ward, and C. Posthoff, “Neighborhood re-structuring in particle swarm optimization,” in Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, vol. 3809 of Lecture Notes in Computer Science, p. 776, Sydney, Australia, December 2005.
  24. H. Liu, E. Howley, and J. Duggan, “Particle swarm optimisation with gradually increasing directed neighbourhoods,” in Proceedings of the 13th Annual Genetic and Evolutionary Computation Conference (GECCO '11), pp. 29–36, July 2011.
  25. M. A. Montes de Oca, T. Stützle, K. van den Enden, and M. Dorigo, “Incremental social learning in particle swarms,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 2, pp. 368–384, 2011.
  26. X. Li, “Niching without niching parameters: particle swarm optimization using a ring topology,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 1, pp. 150–169, 2010.
  27. F. Riganti Fulginei and A. Salvini, “Hysteresis model identification by the flock-of-starlings optimization,” International Journal of Applied Electromagnetics and Mechanics, vol. 30, no. 3-4, pp. 321–331, 2009.
  28. F. Riganti Fulginei and A. Salvini, “The flock-of-starlings optimization: influence of topological rules on the collective behaviour of swarm intelligence,” in Computational Methods for the Innovative Design of Electrical Devices, Studies in Computational Intelligence, pp. 139–157, Springer, 2010.
  29. M. Ballerini, N. Cabibbo, R. Candelier et al., “Interaction ruling animal collective behavior depends on topological rather than metric distance: evidence from a field study,” Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 4, pp. 1232–1237, 2008.
  30. J. L. Fernández-Martínez, E. García-Gonzalo, and J. P. Fernández-Alvarez, “Theoretical analysis of particle swarm trajectories through a mechanical analogy,” International Journal of Computational Intelligence Research, vol. 4, no. 2, pp. 93–104, 2008.
  31. H. M. Emara and H. A. Abdelfatah, “Continuous swarm optimization technique with stability analysis,” in Proceedings of the American Control Conference (ACC '04), vol. 3, pp. 2811–2817, July 2004.
  32. S. M. Mikki and A. A. Kishk, “Physical theory for particle swarm optimization,” Progress in Electromagnetics Research, vol. 75, pp. 171–207, 2007.
  33. S. Coco, A. Laudani, F. Riganti Fulginei, and A. Salvini, “Accurate design of Helmholtz coils for ELF bioelectromagnetic interaction by means of continuous FSO,” International Journal of Applied Electromagnetics and Mechanics, vol. 39, no. 1–4, pp. 651–656, 2012.
  34. A. Laudani, G. Pulcini, F. Riganti Fulginei, and A. Salvini, “Swarm circuits performing optimization and inverse problems,” in Proceedings of the International Workshops on Optimization and Inverse Problems in Electromagnetism (OIPE '12), Ghent, Belgium, September 2012.
  35. A. Laudani, F. Riganti Fulginei, and A. Salvini, “Closed forms for the fully-connected continuous flock-of-starlings optimization algorithm,” in Proceedings of the 15th International Conference on Computer Modeling and Simulation (UKSim '13), Cambridge, UK, April 2013.
  36. W. Li, “Stability analysis of swarms with general topology,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 38, no. 4, pp. 1084–1097, 2008.
  37. V. Kadirkamanathan, K. Selvarajah, and P. J. Fleming, “Stability analysis of the particle dynamics in particle swarm optimizer,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 245–255, 2006.
  38. S. Kiranyaz, T. Ince, A. Yildirim, and M. Gabbouj, “Fractional particle swarm optimization in multidimensional search space,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 40, no. 2, pp. 298–319, 2010.
  39. CFSO—Continuous Flock-of-Starlings Optimization Algorithm tool, 2013, http://www.dea.uniroma3.it/elettrotecnica/.
  40. F. Riganti Fulginei, A. Salvini, and G. Pulcini, “Metric-topological-evolutionary optimization,” Inverse Problems in Science and Engineering, vol. 20, no. 1, pp. 41–58, 2012.
  41. S. Coco, A. Laudani, G. Pulcini, F. Riganti Fulginei, and A. Salvini, “Shape optimization of multistage depressed collectors by parallel evolutionary algorithm,” IEEE Transactions on Magnetics, vol. 48, no. 2, pp. 435–438, 2012.
  42. S. Coco, A. Laudani, F. Riganti Fulginei, and A. Salvini, “TEAM problem 22 approached by a hybrid artificial life method,” The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 31, no. 3, pp. 816–826, 2012.
  43. C. MacNish, “Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation,” Connection Science, vol. 19, no. 4, pp. 361–385, 2007.
  44. K. Tang, X. Yao, P. N. Suganthan et al., “Benchmark functions for the CEC'2008 special session and competition on large scale global optimization,” Tech. Rep., Nature Inspired Computation and Applications Laboratory, USTC, China, 2007.
  45. M. Gneo, M. Schmid, S. Conforto, and T. D'Alessio, “A free geometry model-independent neural eye-gaze tracking system,” Journal of NeuroEngineering and Rehabilitation, vol. 9, article 82, 2012.
  46. I. Bernabucci, S. Conforto, M. Capozza, N. Accornero, M. Schmid, and T. D'Alessio, “A biologically inspired neural network controller for ballistic arm movements,” Journal of NeuroEngineering and Rehabilitation, vol. 4, article 33, 2007.