Research Article  Open Access
Hybrid PSO-SA Type Algorithms for Multimodal Function Optimization and Reducing Energy Consumption in Embedded Systems
Abstract
The paper presents a novel hybrid evolutionary algorithm that combines the Particle Swarm Optimization (PSO) and Simulated Annealing (SA) algorithms. When a local optimal solution is reached with PSO, all particles gather around it, and escaping from this local optimum becomes difficult. To avoid premature convergence of PSO, we present a new hybrid evolutionary algorithm, called HPSOSA, based on the idea that PSO ensures fast convergence, while SA brings the search out of local optima because of its strong local-search ability. The proposed HPSOSA algorithm is validated on ten standard benchmark multimodal functions, for which we obtain significant improvements. The results are compared with those obtained by existing hybrid PSO-SA algorithms. We also provide two versions of HPSOSA (sequential and distributed) for minimizing the energy consumption in embedded-system memories. The two versions of HPSOSA reduce the energy consumption in memories by 76% up to 98% as compared to Tabu Search (TS). Moreover, the distributed version of HPSOSA provides execution time savings of about 73% up to 84% on a cluster of 4 PCs.
1. Introduction
Several optimization algorithms have been developed over the last few decades for solving real-world optimization problems. Among them are many heuristics, such as Simulated Annealing (SA) [1], and optimization algorithms that make use of social or evolutionary behaviors, such as Particle Swarm Optimization (PSO) [2, 3]. SA and PSO are quite popular heuristics for solving complex optimization problems, but each has its own strengths and limitations.
Particle Swarm Optimization (PSO) is based on the social behavior of individuals living together in groups. Each individual tries to improve itself by observing other group members and imitating the better ones. In this way, the group members perform an optimization procedure, as described in [3]. The performance of the algorithm depends on how the particles (i.e., potential solutions to an optimization problem) move in the search space, given that the velocity is updated iteratively. A large body of research is therefore devoted to the analysis and proposal of different motion rules (see [4–6] for recent accounts of PSO research). To avoid premature convergence of PSO, we combine it with SA: PSO contributes to the hybrid approach by ensuring that the search converges quickly, while SA makes the search jump out of local optima thanks to its strong local-search ability. In this paper, we present a hybrid optimization algorithm, called HPSOSA, which intuitively exploits the positive features of PSO and SA. We also validate HPSOSA using ten benchmark functions given in [7] and compare the results with the classical PSO, ATREPSO, QIPSO, and GMPSO algorithms described in [2], TLPSO [8], PSOSA [9], and SAPSO and SUPERSAPSO presented in [10]. We also provide two versions of HPSOSA (sequential and distributed) for minimizing the energy consumption in embedded-system memories. The two versions of HPSOSA reduce the energy consumption in memories by 76% up to 98% as compared to Tabu Search (TS). Moreover, the distributed version of HPSOSA provides execution time savings of about 73% up to 84% on a cluster of 4 PCs.
The rest of the paper is organized as follows. Section 2 briefly introduces the PSO and SA algorithms. Section 3 is devoted to a detailed description of HPSOSA. In Section 4, benchmark functions are applied to HPSOSA. In Section 5, HPSOSA is used to solve the energy consumption problem in memory; simulation results are provided and compared with those of [11]. Conclusions and further research directions are given in Section 6.
2. Background
2.1. Simulated Annealing Algorithm
SA [12] is a probabilistic variant of the local search method which, in contrast to PSO, can escape local optima. SA is based on an analogy taken from thermodynamics: in order to grow a crystal, we start by heating the material until it reaches its molten state; then we reduce the temperature of this crystal melt gradually, until the crystal structure is formed. A standard SA procedure begins by generating an initial solution at random. At each step, a small random change is made to the current solution. The objective function value of the new solution is then calculated and compared with that of the current solution. A move is made to the new solution if it has a better value or if the probability function implemented in SA has a higher value than a randomly generated number. Otherwise, a new solution is generated and evaluated. The probability of accepting a new solution is given as follows:
$$p = \min\left(1, \exp\left(-\frac{f(s_{\text{new}}) - f(s_{\text{current}})}{T}\right)\right). \tag{1}$$
The calculation of this probability relies on a parameter $T$, referred to as the temperature, since it plays a role similar to that of the temperature in the physical annealing process. To avoid getting trapped at a local minimum point, the rate of reduction should be slow. In our problem we use the following geometric method to reduce the temperature:
$$T_{k+1} = \alpha \, T_k, \tag{2}$$
where $0 < \alpha < 1$.
Thus, at the start of SA most worsening moves may be accepted, but towards the end only improving ones are likely to be allowed, which helps the procedure jump out of local minima. The algorithm may be terminated after a certain volume fraction of the structure has been reached or after a prespecified runtime.
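To make the above procedure concrete, the following is a minimal C++ sketch of the SA loop using the acceptance rule (1) and the geometric cooling schedule (2). The objective function, the neighbor generator, and the parameter values are placeholders for problem-specific components, not the exact settings used in this paper.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Minimal SA skeleton. 'objective' and 'neighbor' are problem-specific
// placeholders; T, alpha and maxEvals are illustrative parameters.
std::vector<double> simulatedAnnealing(
        std::vector<double> solution,
        double (*objective)(const std::vector<double>&),
        std::vector<double> (*neighbor)(const std::vector<double>&),
        double T, double alpha, int maxEvals) {
    double cost = objective(solution);
    for (int eval = 0; eval < maxEvals; ++eval) {
        std::vector<double> candidate = neighbor(solution); // small random change
        double delta = objective(candidate) - cost;
        // Accept improving moves always; accept worsening moves with
        // probability exp(-delta / T), as in (1).
        if (delta < 0.0 || (double)rand() / RAND_MAX < std::exp(-delta / T)) {
            solution = candidate;
            cost += delta;
        }
        T *= alpha; // geometric cooling (2), 0 < alpha < 1
    }
    return solution;
}
```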
2.2. Particle Swarm Optimization
PSO is a population-based stochastic optimization technique developed by Kennedy and Eberhart [13], inspired by the social behavior patterns of organisms that live and interact within large groups. In particular, it incorporates swarming behaviors observed in flocks of birds, schools of fish, or swarms of bees, and even human social behavior.
The PSO algorithm is based on the idea that particles move through the search space with velocities that are dynamically adjusted according to their historical behaviors. The particles therefore tend to move towards better and better search areas over the course of the search process. The PSO algorithm starts with a group of random (or not) particles (solutions) and then searches for optima by updating each generation. Each particle is treated as a volumeless point in the $D$-dimensional search space. The $i$th particle is represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. At each generation, the particles are updated by using the following two values.
(i) The first value is the best solution (fitness) a particle has achieved so far (the fitness value is also stored). This value is called $pbest$.
(ii) The second value is the best value tracked so far (by any particle) in the population by the particle swarm optimizer. This best value is a global best and is called $gbest$. When a particle takes part of the population as its topological neighbors, the best value is a local best and is called $lbest$.
At each iteration, these two best values are combined to adjust the velocity along each dimension, and the velocity is then used to compute a new position for the particle. A portion of the adjustment to the velocity is influenced by the individual's previous best position ($pbest$), considered as the cognition component, and another portion is influenced by the best in the neighborhood ($gbest$ or $lbest$), the social component (see Figure 1). With the addition of the inertia factor $w$ by [14] (for balancing the global and the local search), the equations for the velocity and position adjustment are
$$v_{id} = w \, v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id}), \tag{3}$$
$$x_{id} = x_{id} + v_{id}, \tag{4}$$
where $r_1$ and $r_2$ are random numbers independently generated within the range $[0,1]$, and $c_1$ and $c_2$ are two learning factors which control the influence of the cognitive and social components (usually $c_1 = c_2 = 2$, see [15]).
In (3), if the sum on the right side exceeds a constant value, then the velocity on that dimension is assigned to be $\pm V_{max}$. Thus, particle velocities are clamped to the range $[-V_{max}, V_{max}]$, which serves as a constraint to control the global exploration ability of the PSO algorithm. This also reduces the likelihood of particles leaving the search space. Note that the values of $x_{id}$ are not restricted to the range $[-V_{max}, V_{max}]$; the clamping only limits the maximum distance that a particle will move during one iteration.
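As an illustration, the following C++ fragment applies the update rules (3) and (4) with velocity clamping for a single particle. The names (w, c1, c2, vMax, pBest, gBest) mirror the notation above; the surrounding swarm bookkeeping is assumed to be handled elsewhere.

```cpp
#include <algorithm>
#include <cstdlib>

// One velocity/position update for a particle, following (3)-(4).
void updateParticle(double* x, double* v, const double* pBest,
                    const double* gBest, int dim,
                    double w, double c1, double c2, double vMax) {
    for (int d = 0; d < dim; ++d) {
        double r1 = (double)rand() / RAND_MAX;
        double r2 = (double)rand() / RAND_MAX;
        v[d] = w * v[d]
             + c1 * r1 * (pBest[d] - x[d])   // cognitive component
             + c2 * r2 * (gBest[d] - x[d]);  // social component
        v[d] = std::max(-vMax, std::min(vMax, v[d])); // clamp to [-vMax, vMax]
        x[d] += v[d];                        // position update (4)
    }
}
```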
3. HPSOSA Hybrid Algorithm
This section presents a new hybrid HPSOSA algorithm which combines the advantages of both PSO (which has a strong global-search ability) and SA (which has a strong local-search ability). Other applications of hybrid PSO and SA algorithms can be found in [9, 10, 16–19].
This hybrid approach makes full use of the exploration capability of both PSO and SA and offsets the weaknesses of each. Consequently, through the application of SA to PSO, the proposed algorithm is capable of escaping from a local optimum. However, if SA were applied to PSO at each iteration, the computational cost would increase sharply and, at the same time, the fast convergence ability of PSO might be weakened. In order to integrate PSO with SA flexibly, SA is applied to PSO every $K$ iterations if no improvement of the global best solution occurs. Therefore, the hybrid HPSOSA approach keeps fast convergence most of the time, thanks to PSO, and escapes from local optima with the aid of SA. In order to allow PSO to jump out of a local optimum, SA is applied to the best solution found in the swarm so far every $K$ iterations, where $K$ is a predefined constant (set based on our experimentations).
The hybrid HPSOSA algorithm works as illustrated in Algorithm 1 (a code sketch of this control flow is given after the following list), where one has the following.
(i) Description of a Particle. Each particle (solution) is represented by its components, that is, $X_i = (x_{i1}, \ldots, x_{iD})$, where $D$ represents the dimension of the optimization problem to solve.
(ii) Initial Swarm. The initial swarm corresponds to the population of particles that will evolve. Each particle is initialized with a uniform random value between the lower and upper boundaries of the interval defining the optimization problem.
(iii) Evaluate Function. The evaluate (or fitness) function in the HPSOSA algorithm is typically the objective function that we want to minimize in the problem. It serves for each solution to be tested for suitability to the environment under consideration.
(iv) SA Algorithm. If no improvement of the global best solution occurs during the last $K$ iterations, the algorithm is trapped in a local optimum point. To escape from it, we apply the SA algorithm to the global best solution. The performance of SA depends on the definition of several control parameters.
(a) Initial Temperature. Kirkpatrick [20] suggested that a suitable initial temperature $T_0$ is one that results in an average probability of about 0.8 of accepting a worsening solution. The value of $T_0$ will clearly depend on the scaling of the objective function $f$ and, hence, is problem-specific. It can be estimated by conducting an initial search (100 iterations in the following simulations) in which all increases in $f$ are accepted, and calculating the average observed objective increase $\overline{\delta f}$; $T_0$ is then given by
$$T_0 = -\frac{\overline{\delta f}}{\ln(0.8)}. \tag{5}$$
(b) Accept Function. The function Accept(current_solution, Neighbor, $T$) is decided by the acceptance probability given by (1), which is the probability of accepting the neighboring configuration.
(c) Generate Function. The neighborhood of each solution is generated by using the following equation:
$$x_{new} = x + d \cdot N(0,1) \cdot R, \tag{6}$$
where $d$ is the direction of the new neighborhood and takes either 1 or $-1$, $N(0,1)$ is a random number with Gaussian (0,1) distribution, and $R$ is a constant corresponding to the radius of the neighborhood generator.
(d) SA_Stop_Criterion. The SA algorithm stops when it has performed 3000 function evaluations, or earlier if the overall maximum number of function evaluations is reached.
(e) Decrementing the Temperature. The most commonly used temperature-reducing function is geometric (see (2)). In the following simulations $\alpha$ is kept fixed.
(f) Inner Loop. The length of each temperature level determines the number of solutions generated at each temperature.
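The control flow just described can be summarized by the following C++ sketch; the helper functions stand for the components defined in items (i)-(iv) above, and their bodies are problem-specific placeholders rather than the authors' exact implementation.

```cpp
// Components described above; bodies are problem-specific placeholders.
void initializeSwarm();
void psoIteration();          // velocity/position updates (3)-(4)
void applySAToGlobalBest();   // SA step of item (iv)
double globalBestFitness();

// HPSOSA control flow: plain PSO iterations, with SA applied to the
// global best whenever it has stagnated for K consecutive iterations.
void hpsosa(int maxIters, int K) {
    initializeSwarm();
    int stagnation = 0;
    for (int iter = 0; iter < maxIters; ++iter) {
        double previousBest = globalBestFitness();
        psoIteration();
        if (globalBestFitness() < previousBest) {
            stagnation = 0;            // gbest improved
        } else if (++stagnation >= K) {
            applySAToGlobalBest();     // escape the local optimum
            stagnation = 0;
        }
    }
}
```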

4. Experimental Results
4.1. Benchmark Functions
In order to compare the performance of the HPSOSA hybrid algorithm with the algorithms described in [2, 8–10], we use the benchmark functions [7] described in Table 1. These functions provide a good starting point for testing the credibility of an optimization algorithm. Each of these functions has many local optima in its solution space, and the number of local optima increases with the complexity of the function, that is, with its dimension. In the following experiments, we used 10-, 20- and 30-dimensional functions, except in the case of the Himmelblau and Shubert functions, which are two-dimensional by definition (see Figure 2 for a 3D representation).

Figure 2: 3D representation of the benchmark functions: (a) Rastrigin, (b) Sphere, (c) Griewank, (d) Rosenbrock, (e) Schwefel, (f) Ackley, (g) Michalewicz, (h) Himmelblau, (i) Shubert.
4.2. Simulation Results and Discussions
To verify the efficiency and effectiveness of the HPSOSA hybrid algorithm, its experimental results are compared with those obtained by [2, 8–10]. Our HPSOSA hybrid algorithm is written in C++ and was compiled using gcc version 2.95.2 (Dev-C++) on a laptop with Windows Vista x64 Premium Home Edition, running an Intel Core 2 Quad (Q9000) at 2 GHz with 4 GB of memory.
4.2.1. Comparison with Results Obtained by Using TLPSO Algorithm [8]
In this section we compare the HPSOSA approach with the TLPSO method [8], which is based on combining the strengths of both PSO and Tabu Search. As described in [8], we apply the HPSOSA algorithm to the following four benchmark problems: the Rastrigin, Schwefel, Griewank and Rosenbrock functions. Here, the number of particles in the swarm is 30, the dimension of the search space is set as in [8], and the number of objective function evaluations is 60000. The results obtained after numerical simulations are shown in Table 2. These results indicate the Mean, Best, and Worst values obtained under the same conditions over 50 trials. By analyzing Table 2, we conclude that the results obtained by the HPSOSA algorithm are better than those obtained by the TLPSO algorithm.

4.2.2. Comparison with Other PSO Algorithms Described in [2]
The performance of four Particle Swarm Optimization algorithms, namely classical PSO, Attraction-Repulsion based PSO (ATREPSO), Quadratic Interpolation based PSO (QIPSO) and Gaussian Mutation based PSO (GMPSO), is evaluated in [2]. The algorithms presented in [2] are guided by the diversity of the population to search for the global optimal solution of a given optimization problem, whereas GMPSO uses the concept of mutation and QIPSO uses the reproduction operator to generate a new member of the swarm.
In order to make a fair comparison between classical PSO, ATREPSO, QIPSO, GMPSO and the HPSOSA approach, we fixed, as indicated in [2], the same seed for random number generation so that the initial swarm population is the same for all five algorithms. The number of particles in the swarm is 30. The algorithms use a linearly decreasing inertia weight which starts at 0.9 and ends at 0.4, with the user-defined parameters set as in [2]. For each algorithm, the number of objective function evaluations is 300000. A total of 30 runs for each experimental setting were conducted, and the average fitness of the best solutions throughout the run was recorded. The mean solution and the standard deviation (note that the standard deviation indicates the stability of the algorithms) found by the five algorithms are listed in Table 3. The numerical results given in Table 3 show the following.
(i) All the algorithms outperform the classical Particle Swarm Optimization.
(ii) The HPSOSA algorithm gives much better performance in comparison with PSO, QIPSO, ATREPSO, and GMPSO, except on the Sphere and Ackley functions.
(iii) On the Sphere function, QIPSO obtains better results than the HPSOSA approach. However, when the maximum number of iterations is increased, HPSOSA obtains the optimal value.
(iv) The analysis of the results obtained for the Ackley function shows that QIPSO obtains a better mean result than the HPSOSA algorithm. However, HPSOSA has a much smaller standard deviation.

4.2.3. Comparison with Other PSO Algorithms Described in [10]
In this section four benchmark functions are used to compare the relative performance of HPSOSA algorithm with SUPERSAPSO, SAPSO, and PSO algorithms described in [10].
For all comparisons, the number of particles was set to 30. The HPSOSA algorithm uses a linearly decreasing inertia weight which starts at 0.9 and ends at 0.4, with the user-defined parameters set as in [10]. Twenty runs are conducted for each experimental setting and, for each algorithm, the average value is given in Table 4.

In all the above experiments, the HPSOSA algorithm obtains better results than both the standard PSO and the SAPSO algorithm [10]. A comparison of the HPSOSA algorithm and SUPERSAPSO [10] shows that the latter converges faster than HPSOSA.
SUPERSAPSO uses an expression for the particle movements which is well adapted to the case where the global optimum is 0. This is the reason why SUPERSAPSO needs a very small number of iterations in this case.
4.2.4. Comparison with PSOSA Algorithm Described in [9]
In this section the performance of HPSOSA is compared with that of PSOSA [9], a Genetic Algorithm, and a hybrid algorithm [21].
Table 5 lists the results obtained for three different dimensions of each function. The optimum values of the Sphere, Rastrigin and Griewank functions and the goal value of the Rosenbrock function were set as indicated in [9].

To make a fair comparison, the maximum number of function evaluations allowed was set to 20000, 30000 and 40000 for the HPSOSA and PSOSA algorithms, with the number of particles set to 20. The HPSOSA algorithm uses a linearly decreasing inertia weight which starts at 0.9 and ends at 0.4, with the user-defined parameters set as in [9].
The numerical results given in Table 5 show the following.
(i) Over the four benchmark functions, HPSOSA and PSOSA do better than the standard GA and the hybrid algorithm [21].
(ii) For the Sphere, Rastrigin and Griewank functions, the HPSOSA and PSOSA algorithms obtain optimal solutions within the specified constraints (number of objective function evaluations).
(iii) For the Rosenbrock function, PSOSA obtains better results than HPSOSA for dimension 20, but for dimensions 10 and 30, HPSOSA does better and has a smaller standard deviation.
5. Reducing Memory Energy Consumption in Embedded Systems
5.1. Description of the Memory Problem
According to the trends in [22], memory will become the major energy consumer in an embedded system. Indeed, embedded systems must integrate multiple complex functionalities, which requires bigger batteries and memories. Hence, reducing the memory energy consumption of these systems has never been as topical. In this paper, we focus on software techniques for memory management. In order to reduce memory energy consumption, most authors rely on Scratchpad Memories (SPMs) rather than caches [23]. Although cache memory helps a lot with program speed, it is not appropriate for most embedded systems. In fact, a cache increases the system size and its energy cost (cache area plus managing logic). Like a cache, an SPM consists of small, fast SRAM. The main difference is that the SPM is directly and explicitly managed at the software level, either by the developer or by the compiler, which makes it more predictable. An SPM requires up to 40% less energy and 34% less area than a cache [24]. In this paper, we therefore use an SPM in our memory architecture. Due to the reduced SPM size, we allocate space for interesting data only, whereas the remaining data are placed in main memory (DRAM). In order to determine the interesting data, we use data profiling to gather memory access frequency information. The Tabu Search (TS) approach consists of allocating space for data in the SPM based on TS principles [25]. More details about how TS is implemented can be found in [11].
In order to compute the energy cost of the system, we propose an energy consumption estimation model for our memory architecture, composed of an SPM, an instruction cache and a DRAM. Equation (7) gives the energy model, where the three terms refer to the total energy consumed, respectively, in the SPM, in the instruction cache and in the DRAM:
$$E_{total} = E_{SPM} + E_{IC} + E_{DRAM}. \tag{7}$$
In this model, we distinguish between the two cache write policies: Write-Through (WT) and Write-Back (WB). In a WT cache, every write to the cache causes a synchronous write to DRAM. Alternatively, in a WB cache, writes are not immediately mirrored to DRAM. Instead, the cache tracks which of its locations have been written over and marks these locations as dirty. The data in these locations are written back to DRAM when they are evicted from the cache [26]. In this paper, the aim is to minimize the energy given by the detailed estimation model of (8)–(13).
Equations (8) and (9) represent, respectively, the total energy consumed while reading from and writing to the SPM. Equations (10) and (11) represent, respectively, the total energy consumed while reading from and writing to the instruction cache. Equations (12) and (13) represent, respectively, the total energy consumed while reading from and writing to the DRAM. The various terms used in this energy model are explained in Table 6.

As the SPM has many advantages, it is clearly preferable to put as much data as possible in it. In other words, we must maximize the SPM access terms in the model. Hence, the problem becomes one of maximizing the number of accesses to the SPM. It is therefore a combinatorial optimization problem similar to the knapsack problem [27]. We want to fill an SPM that can hold a maximum capacity $C$ with some combination of data objects from a list of $n$ possible data objects, each with a size and an access count, so that the access count of the data allocated to the SPM is maximized. This problem has a single linear constraint, a linear objective function which sums the sizes of the data allocated into the SPM, and the added restriction that each data object is either in the SPM or not. If $n$ is the total number of data objects, then a solution is a finite sequence of $n$ terms $(x_1, \ldots, x_n)$ such that $x_i$ is either 0 or the size of data object $i$; $x_i = 0$ if and only if data object $i$ is not selected in the solution. The solution must satisfy the constraint of not exceeding the maximum SPM capacity (i.e., $\sum_{i=1}^{n} x_i \le C$).
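A possible encoding of this knapsack-like formulation is sketched below in C++: a 0/1 vector marks which data objects reside in the SPM, with a feasibility check against the capacity and the access-count objective. All names are illustrative and not taken from the authors' implementation.

```cpp
#include <vector>

// 0/1 allocation vector: inSpm[i] == 1 iff data object i is placed in the SPM.
struct SpmSolution {
    std::vector<int> inSpm;
};

// Capacity constraint: total size of the selected objects must not
// exceed the SPM capacity C.
bool isFeasible(const SpmSolution& s, const std::vector<int>& size, int capacity) {
    int used = 0;
    for (std::size_t i = 0; i < s.inSpm.size(); ++i)
        used += s.inSpm[i] * size[i];
    return used <= capacity;
}

// Objective to maximize: number of memory accesses captured by the SPM.
long spmAccesses(const SpmSolution& s, const std::vector<long>& accesses) {
    long total = 0;
    for (std::size_t i = 0; i < s.inSpm.size(); ++i)
        total += s.inSpm[i] * accesses[i];
    return total;
}
```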
5.2. Discrete Sequential Hybrid HPSOSA Algorithm
This section should be considered as an attempt to use hybrid evolutionary algorithms for reducing energy consumption in embedded systems. Here, the focus is on the use of the HPSOSA algorithm designed in the previous sections. Since the problem under consideration is discrete and has specific features, HPSOSA needs some changes.
Description of a Particle
A solution can be represented by an array whose size equals the number of data objects. Each element of this array denotes whether a data object is included in the SPM ("1") or not ("0"). The HPSOSA algorithm starts with an initial swarm which is randomly initialized.
Evaluate Function
It is the objective function that we want to minimize in the problem. It serves for each solution to be tested for suitability to the environment under consideration.
Position Update Equation
Each dimension of the particle is updated by using (15):
$$x_{id} = \begin{cases} 1 & \text{if } r < S(v_{id}), \\ 0 & \text{otherwise}, \end{cases} \tag{15}$$
where $r$ is a uniform random number in $[0,1]$,
and $S$ is the sigmoid function, used to scale the velocities between 0 and 1, defined as
$$S(v) = \frac{1}{1 + e^{-v}}. \tag{16}$$
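For illustration, a C++ rendering of the discrete position rule (15)-(16) could look as follows; it is a sketch of the standard binary PSO update, with variable names chosen here for clarity.

```cpp
#include <cmath>
#include <cstdlib>

// Sigmoid of (16): scales a velocity into a probability in (0, 1).
double sigmoid(double v) { return 1.0 / (1.0 + std::exp(-v)); }

// Discrete position update of (15): bit d is set to 1 with
// probability sigmoid(v[d]).
void updateBinaryPosition(int* x, const double* v, int dim) {
    for (int d = 0; d < dim; ++d) {
        double r = (double)rand() / RAND_MAX;
        x[d] = (r < sigmoid(v[d])) ? 1 : 0;
    }
}
```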
Generate Function
SA uses a notion of neighborhood relation. Let $S$ be the set of all feasible solutions to the problem and $f$ the objective function to be minimized. A neighborhood relation is a binary relation $N \subseteq S \times S$ with some desired properties; the interpretation of $(s, s') \in N$ is that solution $s'$ is a neighbor of solution $s$ in the search space of all solutions $S$. A neighborhood heuristic proceeds in steps: the search starts at some initial solution, and each step moves from the current solution to some neighbor according to rules specific to the heuristic. At each iteration, the SA algorithm generates a random neighbor of the current solution (line 10 of Algorithm 1). The neighborhood relation is defined as follows:
(1) with probability equal to 0.03, the value of each element of the solution is flipped from 1 to 0 or from 0 to 1;
(2) the solution is then validated: while it is not feasible (i.e., it exceeds the maximum SPM capacity), the data object having the lowest number of accesses is removed from it.
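The neighborhood relation above can be sketched in C++ as follows: each bit is flipped with probability 0.03, and infeasible solutions are repaired by evicting the selected objects with the fewest accesses. The helper names and the exact repair loop are illustrative assumptions.

```cpp
#include <cstdlib>
#include <vector>

std::vector<int> generateNeighbor(std::vector<int> s,
                                  const std::vector<int>& size,
                                  const std::vector<long>& accesses,
                                  int capacity) {
    // Step 1: flip each bit with probability 0.03.
    for (std::size_t i = 0; i < s.size(); ++i)
        if ((double)rand() / RAND_MAX < 0.03)
            s[i] = 1 - s[i];
    // Step 2: repair. While the SPM capacity is exceeded, remove the
    // selected data object with the lowest access count.
    for (;;) {
        int used = 0;
        for (std::size_t i = 0; i < s.size(); ++i)
            used += s[i] * size[i];
        if (used <= capacity) break;
        std::size_t victim = s.size();
        for (std::size_t i = 0; i < s.size(); ++i)
            if (s[i] == 1 && (victim == s.size() || accesses[i] < accesses[victim]))
                victim = i;
        if (victim == s.size()) break;  // nothing left to evict
        s[victim] = 0;
    }
    return s;
}
```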
Accept Function
The key idea in the SA approach is the function $p_a$, which specifies the probability of accepting the move from the current solution $s$ to a neighboring solution $s'$, and which also depends on the so-called temperature $T$. The function $p_a$ should satisfy the following conditions:
(1) if solution $s'$ is better than $s$ in terms of the cost function $f$ (i.e., $f(s') \le f(s)$ in a minimization problem), the move is always accepted;
(2) if $s'$ is worse than $s$, the value of $p_a$ is positive (i.e., it allows moving to a worse solution), but decreases with $f(s') - f(s)$;
(3) for fixed $s$ and $s'$, when $s'$ is worse than $s$, the value of $p_a$ decreases with time and tends to 0.
The Accept function is decided by the probability of accepting the neighboring configuration, which is given by the following formula:
$$p_a(T, s, s') = \min\left(1, \exp\left(-\frac{f(s') - f(s)}{T}\right)\right),$$
where $T$ is the temperature; the move to $s'$ is accepted when $p_a$ exceeds a random number $r$ independently generated within the range $[0, 1]$.
5.3. Discrete Cooperative Distributed Hybrid HPSOSA Algorithm
For the distributed hybrid HPSOSA (HPSOSA_Dist) algorithm, we use independent subswarms of particles with their own fitness functions, which evolve in isolation except for an exchange of some particles (migration). A set of particles is assigned to each of the processors, forming its subswarm; the total population is the union of these sets. The processors are connected by an interconnection network with a ring topology. Initial subswarms consist of a randomly constructed assignment created at each processor. Each processor, disjointly and in parallel, executes the HPSOSA_Seq algorithm on its subswarm for a certain number of generations. Afterwards, each subswarm exchanges its best particle (migrant) with its neighbors. (HPSOSA_Dist runs in asynchronous mode: at each migration step, a processor sends its best solution, continues the improvement of its subswarm, and checks whether it has received a solution from its neighbor.) We exchange the particles themselves (i.e., the migrant is removed from one subswarm and added to another); hence, the size of the subswarm remains the same after migration (the worst particle is removed). The process continues with the separate improvement of each subswarm for a maximum number of iterations. At the end of the process, the best solution found constitutes the final assignment.
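As an illustration of the asynchronous ring migration, the sketch below uses MPI; the paper does not name the message-passing layer, so MPI and all identifiers here are assumptions made purely for illustration.

```cpp
#include <mpi.h>
#include <vector>

void replaceWorstParticle(const std::vector<double>& migrant); // subswarm update, defined elsewhere

// Every migrationPeriod iterations: send the local best to the next
// processor on the ring, then check (without blocking) whether a best
// solution has arrived from the previous processor.
void migrationStep(std::vector<double>& localBest, int iter,
                   int migrationPeriod, int rank, int numProcs) {
    if (iter % migrationPeriod != 0) return;
    int next = (rank + 1) % numProcs;
    int prev = (rank + numProcs - 1) % numProcs;
    MPI_Request req;
    MPI_Isend(localBest.data(), (int)localBest.size(), MPI_DOUBLE,
              next, 0, MPI_COMM_WORLD, &req);       // non-blocking send
    int arrived = 0;
    MPI_Iprobe(prev, 0, MPI_COMM_WORLD, &arrived, MPI_STATUS_IGNORE);
    if (arrived) {                                   // asynchronous check
        std::vector<double> migrant(localBest.size());
        MPI_Recv(migrant.data(), (int)migrant.size(), MPI_DOUBLE,
                 prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        replaceWorstParticle(migrant);               // worst particle is removed
    }
    MPI_Wait(&req, MPI_STATUS_IGNORE);               // complete the send
}
```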
5.4. Experimental Results
In order to compute the energy cost of the studied memory architecture, composed of an SPM, an instruction cache and a DRAM, we proposed an energy consumption estimation model, which is explained in [11]. The hybrid HPSOSA algorithms and TS have been implemented on a cluster of PCs running Windows XP Professional version 2002. The cluster is composed of 4 Pentium (D) machines running at 3 GHz, each with 1 GB of memory. Table 7 gives a description of the benchmarks used; they can also be downloaded from [28].

In the experiments, 30 different executions of each heuristic are performed, and the best and average results over these 30 executions are recorded. In this case, the best and the average solutions give similar results. Figure 3 shows that both HPSOSA_Seq and HPSOSA_Dist achieve better energy savings than TS. In fact, the hybrid HPSOSA heuristics consume from 76.23% (StatemateCE) to 98.92% (ShaCE) less energy than TS.
As HPSOSA_Seq and HPSOSA_Dist give similar results, we examine their behavior with respect to execution time. We recorded the average execution times needed by HPSOSA_Seq and HPSOSA_Dist (running on a cluster of 4 PCs) to complete the 30 executions. Figure 4 presents the results obtained on the largest benchmarks. From this figure, we see that the distributed version (HPSOSA_Dist) is always faster than the sequential version (HPSOSA_Seq): HPSOSA_Dist requires from 73.16% (AdpcmCE) to 84.65% (CntCE) less execution time than HPSOSA_Seq.
6. Conclusion and Perspectives
In this paper, we have designed a hybrid algorithm (HPSOSA) that combines the exploration ability of PSO with the exploitation ability of SA and is capable of preventing premature convergence. Compared with QIPSO, ATREPSO and GMPSO [2], TLPSO [8], PSOSA [9], and SUPERSAPSO [10] on well-known benchmark functions, and applied to the problem of reducing energy consumption in embedded-system memories, HPSOSA has been shown to perform well in terms of accuracy, convergence rate, stability and robustness. In the future, we will also compare the performance of HPSOSA with the above-mentioned algorithms on the embedded-system memory saving problem.
In addition, we will compare the HPSOSA algorithm with other hybrid algorithms (PSO-GA, PSO-MDP, PSO-TS) whose design is in progress by the authors. Comparisons will also be carried out on additional benchmark functions and more complex problems, including functions with dimensionality larger than 30.
Acknowledgments
The authors are grateful to the anonymous referees for their pertinent comments and suggestions. Dawood Khan helped the authors with the intricacies of the English language. The work of M. Idrissi Aouad is supported by the French National Research Agency (ANR) in the Future Architectures program.
References
M. Locatelli, "Simulated annealing algorithms for continuous global optimization," in Handbook of Global Optimization, P. M. Pardalos and H. E. Romeijn, Eds., vol. 2, pp. 179–230, Kluwer Academic, 2001.
M. Pant, R. Thangaraj, and A. Abraham, "Particle swarm based metaheuristics for function optimization and engineering applications," in Proceedings of the 7th Computer Information Systems and Industrial Management Applications (CISIM '08), vol. 7, pp. 84–90, IEEE Computer Society, Washington, DC, USA, 2008.
J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann/Academic Press, 2001.
M. A. Montes de Oca, T. Stützle, et al., "Convergence behavior of the fully informed particle swarm optimization algorithm," in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation (GECCO '08), pp. 71–78, 2008.
A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2005.
R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization: an overview," Swarm Intelligence, vol. 1, pp. 33–57, 2007.
P. N. Suganthan, N. Hansen, J. J. Liang et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Tech. Rep. 2005005, Nanyang Technological University, Singapore; IIT Kanpur, India, 2005.
S. Nakano, A. Ishigame, and K. Yasuda, "Consideration of particle swarm optimization combined with tabu search," IEEJ Transactions on Electronics, Information and Systems, vol. 128, pp. 1162–1167, 2008.
G. Yang, D. Chen, and G. Zhou, "A new hybrid algorithm of particle swarm optimization," in Lecture Notes in Computer Science, vol. 4115, pp. 50–60, 2006.
M. Bahrepour, E. Mahdipour, R. Cheloi, and M. Yaghoobi, "SUPERSAPSO: a new SA-based PSO algorithm," in Applications of Soft Computing, vol. 58, pp. 423–430, 2009.
M. I. Aouad, R. Schott, and O. Zendra, "A tabu search heuristic for scratch-pad memory management," in Proceedings of the International Conference on Software Engineering and Technology (ICSET '10), vol. 64, pp. 386–390, WASET, Rome, Italy, 2010.
S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, IEEE Computer Society, 1995.
Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '98), pp. 69–73, IEEE Computer Society, 1998.
Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation (CEC '99), pp. 1945–1950, 1999.
W. J. Xia and Z. M. Wu, "A hybrid particle swarm optimization approach for the job-shop scheduling problem," International Journal of Advanced Manufacturing Technology, vol. 29, no. 3-4, pp. 360–366, 2006.
D. Chaojun and Z. Qiu, "Particle swarm optimization algorithm based on the idea of simulated annealing," International Journal of Computer Science and Network Security, vol. 6, no. 10, pp. 152–157, 2006.
L. Fang, P. Chen, and S. Liu, "Particle swarm optimization with simulated annealing for TSP," in Proceedings of the 6th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED '07), pp. 206–210, WSEAS, Stevens Point, Wis, USA, 2007.
X. Wang and J. Li, "Hybrid particle swarm optimization with simulated annealing," in Proceedings of the 3rd International Conference on Machine Learning and Cybernetics (ICMLC '04), vol. 4, pp. 2402–2405, 2004.
S. Kirkpatrick, "Optimization by simulated annealing: quantitative studies," Journal of Statistical Physics, vol. 34, no. 5-6, pp. 975–986, 1984.
M. Løvbjerg, T. K. Rasmussen, and T. Krink, Hybrid Particle Swarm Optimization with Breeding and Subpopulations, Springer, Berlin, Germany, 2000.
"ITRS, System Drivers," 2007, http://www.itrs.net/Links/2007ITRS/2007_Chapters/2007_SystemDrivers.pdf.
M. I. Aouad and O. Zendra, "A survey of scratch-pad memory management techniques for low-power and energy," in Proceedings of the 2nd ECOOP Workshop on Implementation, Compilation, Optimization of Object-Oriented Languages, Programs and Systems (ICOOOLPS '07), pp. 31–38, Berlin, Germany, 2007.
H. B. Fradj, A. El Ouardighi, C. Belleudy, and M. Auguin, "Energy aware memory architecture configuration," SIGARCH Computer Architecture News, vol. 33, no. 3, pp. 3–9, 2005.
M. Gendreau, An Introduction to Tabu Search, vol. 57, Kluwer Academic, Boston, Mass, USA, 2003.
A. Tanenbaum, Architecture de l'ordinateur, 5th edition, 2005.
H. Kellerer, U. Pferschy, and D. Pisinger, Knapsack Problems, Springer, Berlin, Germany, 2004.
"Benchmarks," http://www.loria.fr/~idrissma/benchs.zip.
Copyright
Copyright © 2011 Lhassane Idoumghar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.