Research Article  Open Access
Two-Dimensional IIR Filter Design Using Simulated Annealing Based Particle Swarm Optimization
Abstract
We present a novel hybrid algorithm based on particle swarm optimization (PSO) and simulated annealing (SA) for the design of two-dimensional recursive digital filters. The proposed method, known as SA-PSO, integrates the global search ability of PSO with the local search ability of SA, each offsetting the weaknesses of the other. The Metropolis acceptance criterion is incorporated into the basic PSO algorithm to increase the swarm's diversity by occasionally accepting weaker solutions as well. The experimental results reveal that the performance of the optimal filter designed by the proposed SA-PSO method is improved. Further, both the convergence behavior and the optimization accuracy of the proposed method are improved significantly, and the computational time is reduced. In addition, the proposed SA-PSO method produces the best optimal solution with lower mean and variance, which indicates that the algorithm can be applied efficiently to the realization of two-dimensional digital filters.
1. Introduction
Design of two-dimensional (2D) filters has been considered extensively over the past two decades, as it plays a significant role in biomedical image processing, satellite imaging, seismic data processing, and so forth [1]. It is well known that digital filters are typically classified into two groups: recursive, or infinite impulse response (IIR), and nonrecursive, or finite impulse response (FIR), filters. The design of IIR filters has received much more attention because, for the same number of filter coefficients, IIR filters can provide better performance than FIR filters. The main problems of IIR filters, however, are that they have a multimodal error surface and that they may be unstable in some cases. The problem of the multimodal error surface can be handled efficiently by a global optimization method, while the stability problem can be tackled by restricting the search space with appropriate constraints from the beginning of the optimization routine [1, 2]. As in the 1D case, a 2D IIR filter can meet a given set of specifications with fewer coefficients than an equivalent 2D FIR filter. Design methodologies for 2D filters fall into two groups: McClellan-transformation-based design from a 1D prototype, and design based on appropriate optimization techniques [1–6]. In optimization-based methods, the design problem is formulated as a constrained minimization problem and solved by a global optimization technique. Previously reported work on this problem has applied different optimization techniques, such as neural networks (NN) [1], genetic algorithms (GA) [2], the computer language GENETICA [3], the Taguchi-based immune algorithm [4], the Bees algorithm [5], and particle swarm optimization (PSO) [6]. Most of these algorithms converge slowly to a good near-optimum solution and are easily trapped in local optima.
These shortcomings can be avoided by combining SA with PSO, because SA has strong local search capability while PSO exhibits fast global searching ability.
Earlier reported works demonstrated that combining SA with PSO yields significant improvements for different engineering problems, such as partner selection in virtual enterprises [7], the job shop scheduling problem [8], a holon task allocation scheme [9], energy consumption reduction in embedded systems [10], and numerical verification on several benchmark functions [11]. Zhao et al. [7, 9] and Jamili et al. [8] combined SA with PSO for solving different practical problems, where SA starts with the global best solution generated by PSO. These hybrid algorithms [7–9] can easily be reduced to basic PSO by ignoring the SA subroutine and, at the same time, can be used as conventional SA by setting the population size to one particle. It is demonstrated in [7–9] that the hybrid method outperforms conventional GA, SA, and PSO not only by providing quality solutions but also by exhibiting greater convergence speed. Idoumghar et al. [10] and Shieh et al. [11] applied this hybrid algorithm to several benchmark functions and to reducing energy consumption in embedded system memories. It is shown in [10] that the hybrid SA-PSO outperforms most of the recently introduced PSO-based methods, such as QIPSO (quadratic-interpolation-based PSO) and GMPSO (Gaussian-mutation-based PSO), in terms of accuracy, robustness, and convergence speed. Motivated by the improved features of hybrid SA-PSO, the design problem of 2D IIR filters is implemented here more efficiently. Simulation results using this approach ensure better-quality solutions with less computational time. The novelty of the proposed work is as follows: (i) a new hybrid algorithm for designing 2D recursive filters, (ii) the effective application of the Metropolis criterion to escape from local optima, and (iii) adaptive simulated annealing (ASA) for better control of convergence speed and accuracy.
The rest of the paper is organized as follows. The design problem of 2D IIR filters and the stability conditions are presented in Section 2. Section 3 introduces the preliminary concepts of PSO. The proposed design method using hybrid SA-PSO is described in Section 4. The simulation results, including comparisons on different design examples, are presented in Section 5. Finally, conclusions and future work are given in Section 6.
2. Problem Formulation of 2D IIR Filter
Here we consider the design problem of an Nth-order 2D IIR filter with transfer function [6]:

H(z₁, z₂) = H₀ · [Σ_{i=0}^{N} Σ_{j=0}^{N} a_ij z₁^i z₂^j] / [Π_{k=1}^{N} (1 + b_k z₁ + c_k z₂ + d_k z₁ z₂)],  a₀₀ = 1,  (1)

where a_ij, b_k, c_k, d_k, and H₀ are the filter coefficients. Let z₁ = e^{−jω₁}, z₂ = e^{−jω₂}, with the frequencies sampled as ω₁ = πn₁/N₁, n₁ = 0, 1, …, N₁, and ω₂ = πn₂/N₂, n₂ = 0, 1, …, N₂. The design task of 2D filters is to find a transfer function as in (1) such that the magnitude function M(ω₁, ω₂) = H(e^{−jω₁}, e^{−jω₂}) approximates the desired amplitude response Md(ω₁, ω₂) in some optimal sense. This approximation can be achieved by minimizing J, where

J = Σ_{n₁=0}^{N₁} Σ_{n₂=0}^{N₂} [ |M(ω₁, ω₂)| − Md(ω₁, ω₂) ]^p.  (2)

Here N₁ = 50, N₂ = 50, and p is a positive integer (e.g., p = 2, 4, or 8). Hence, the primary aim is to minimize the difference between the actual and desired amplitude responses of the filter at the (N₁ + 1) × (N₂ + 1) grid points. Since the denominator consists of only first-degree factors, the stability conditions are given by

|b_k + c_k| − 1 < d_k < 1 − |b_k − c_k|,  k = 1, 2, …, N.  (3)

Therefore, the design task of 2D recursive filters reduces to the following constrained minimization problem: minimize J subject to the constraints given in (3).
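The stability condition in (3) is straightforward to check in code. The following sketch (the function name and argument layout are illustrative, not from the paper) tests each first-degree denominator factor against the double inequality:

```python
def is_stable(b, c, d):
    """Check the stability condition |b_k + c_k| - 1 < d_k < 1 - |b_k - c_k|
    for each first-degree denominator factor (1 + b_k*z1 + c_k*z2 + d_k*z1*z2).

    b, c, d are sequences of the k-th factor coefficients."""
    return all(abs(bk + ck) - 1 < dk < 1 - abs(bk - ck)
               for bk, ck, dk in zip(b, c, d))
```

For example, `is_stable([0.1, 0.2], [0.1, 0.1], [0.0, 0.0])` holds, while a factor with `b_k = c_k = 0.9`, `d_k = 0` violates the left inequality.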
Assuming N = 2, the transfer function of the second-order 2D filter can be written as follows:

H(z₁, z₂) = H₀ · [Σ_{i=0}^{2} Σ_{j=0}^{2} a_ij z₁^i z₂^j] / [(1 + b₁z₁ + c₁z₂ + d₁z₁z₂)(1 + b₂z₁ + c₂z₂ + d₂z₁z₂)],  a₀₀ = 1.  (4)

Now substitute z₁ = e^{−jω₁} and z₂ = e^{−jω₂} in (4); then H(ω₁, ω₂) can be further written as follows:

H(ω₁, ω₂) = H₀ · [Σ_{i=0}^{2} Σ_{j=0}^{2} a_ij e^{−j(iω₁ + jω₂)}] / [Π_{k=1}^{2} (1 + b_k e^{−jω₁} + c_k e^{−jω₂} + d_k e^{−j(ω₁+ω₂)})].  (5)
From (5), M(ω₁, ω₂) can be written as follows:

M(ω₁, ω₂) = H₀ (N_R + jN_I) / [(B₁R + jB₁I)(B₂R + jB₂I)],  (6)

where N_R = Σ_{i=0}^{2} Σ_{j=0}^{2} a_ij cos(iω₁ + jω₂), N_I = −Σ_{i=0}^{2} Σ_{j=0}^{2} a_ij sin(iω₁ + jω₂), B_kR = 1 + b_k cos ω₁ + c_k cos ω₂ + d_k cos(ω₁ + ω₂), and B_kI = −[b_k sin ω₁ + c_k sin ω₂ + d_k sin(ω₁ + ω₂)] for k = 1, 2. Hence the corresponding magnitude response of the 2D filter can be compactly written as follows:

|M(ω₁, ω₂)| = |H₀| √(N_R² + N_I²) / [√(B₁R² + B₁I²) √(B₂R² + B₂I²)].  (7)

To compare the performance of the proposed SA-PSO method, the same design specifications for the desired amplitude response have been used as in [1–6]:

Md(ω₁, ω₂) = 1 if √(ω₁² + ω₂²) ≤ 0.08π; 0.5 if 0.08π < √(ω₁² + ω₂²) ≤ 0.12π; 0 otherwise.  (8)

Consequently, the desired amplitude response of the 2D filter satisfying (8) is depicted in Figure 1.
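The error measure (2) and the desired circularly symmetric low-pass response (8) can be sketched as follows (a minimal Python illustration, assuming the 51 × 51 grid ω_i = πn_i/50 used in [1–6]; the function names are illustrative):

```python
import numpy as np

def desired_response(w1, w2):
    """Md of (8): 1 inside radius 0.08*pi, 0.5 in the annulus up to 0.12*pi, 0 outside."""
    r = np.hypot(w1, w2)
    return np.where(r <= 0.08 * np.pi, 1.0,
                    np.where(r <= 0.12 * np.pi, 0.5, 0.0))

def error_J(M, p=2, N=50):
    """J of (2): p-th power error between |M| and Md on the (N+1) x (N+1) grid.
    M is a callable giving the (complex or real) frequency response M(w1, w2)."""
    n = np.arange(N + 1)
    W1, W2 = np.meshgrid(np.pi * n / N, np.pi * n / N, indexing="ij")
    Md = desired_response(W1, W2)
    return float(np.sum((np.abs(M(W1, W2)) - Md) ** p))
```

By construction, a filter whose magnitude equals Md everywhere gives J = 0, which is the target of the constrained minimization.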
3. An Overview of Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based evolutionary algorithm that mimics biological mechanisms such as bird flocking and fish schooling [12]. During the evolutionary process, the swarm particles are attracted towards the location of the best fitness achieved by the particle itself (local best) and the location of the best fitness achieved by the whole population (global best). The position and velocity of the ith particle are represented as X_i = (x_i1, x_i2, …, x_iD) and V_i = (v_i1, v_i2, …, v_iD), respectively. Particle velocities on each dimension are limited to [−V_max, V_max], which controls the local and global exploration capability of a particle. The best position visited by each particle is denoted pbest_i, and the fittest particle in the group is gbest. Then, the new velocities and positions for the next evaluations are calculated by [12–14]

v_id(t + 1) = w·v_id(t) + c₁r₁[pbest_id − x_id(t)] + c₂r₂[gbest_d − x_id(t)],  (9)
x_id(t + 1) = x_id(t) + v_id(t + 1),  (10)

where w is the inertia weight, c₁ and c₂ are the cognitive and social acceleration control parameters, and r₁ and r₂ are two random numbers uniformly distributed in [0, 1]. One important variant of standard PSO is the constriction-factor-based PSO (CPSO), proposed by Clerc and Kennedy [14]. CPSO can generate higher-quality solutions than the standard PSO with inertia weight, and it guarantees the convergence of the search procedure. In CPSO, the particle velocity is updated by the following equation:

v_id(t + 1) = χ[v_id(t) + c₁r₁(pbest_id − x_id(t)) + c₂r₂(gbest_d − x_id(t))],  (11)

where χ is called the constriction factor, given by χ = 2/|2 − φ − √(φ² − 4φ)|, with φ = c₁ + c₂, φ > 4.
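The update equations (9) and (10) for the whole swarm can be sketched in a few lines (illustrative Python; the inertia weight and acceleration values shown are common defaults from the PSO literature, not the paper's tuned settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, vmax=1.0):
    """One inertia-weight PSO update, equations (9)-(10), for the whole swarm.

    x, v, pbest: arrays of shape (swarm_size, dim); gbest: shape (dim,).
    Returns the new positions and velocities."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (9)
    v = np.clip(v, -vmax, vmax)        # limit each dimension to [-Vmax, Vmax]
    return x + v, v                    # eq. (10)
```

The constriction-factor variant (11) is obtained by multiplying the bracketed sum by χ instead of applying the inertia weight to the old velocity.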
It is well known that in the PSO process, the inertia weight plays an important role in balancing the exploration-exploitation trade-off. In [15, 16], it has been demonstrated that chaotic-inertia-weight-based PSO (PSO-CIW) [17] is the best strategy for accuracy, whereas PSO with random inertia weight (PSO-RANDIW) produces optimal solutions with better efficiency. Therefore, these two best-performing inertia weight strategies, together with time-varying inertia weight (PSO-TVIW), are considered here to compare the performance of the proposed SA-PSO. A summary of the different inertia weight strategies reported earlier is given in Table 1, along with the required constraints.

4. The Proposed SA-PSO Algorithm
The main benefit of PSO is that it is a problem-independent, stochastic search optimization method. Due to its stochastic nature, however, it shows insufficient global searching ability at the end of a run, particularly on multimodal functions. Hence, to escape from local minima and increase the diversity of the particles, SA is integrated with PSO. Like PSO, SA is a heuristic random-search global optimization method, proposed by Kirkpatrick et al. [18]. During the search process, SA accepts not only better candidate solutions but also worse solutions, to a certain degree, according to the Metropolis criterion [19], which can be mathematically represented by

P = 1 if ΔE < 0;  P = exp(−ΔE/T) otherwise,  (12)

where ΔE denotes the change in energy due to the perturbation of the parameters and T is the current temperature of the system. To apply (12), a random number r ∈ [0, 1) is generated and it is checked whether r < P or not. A suitable cooling schedule, based on adaptive simulated annealing (ASA), is introduced here to update the temperature of the system [20]. The annealing schedule for the kth iteration is written as follows:

T_k = T₀ exp(−c·k^(1/D)),  (13)

where T₀ is the starting temperature, c represents the quenching factor, and D is the dimension of the search space. Finally, a well-constructed stopping condition is included in our proposed method to minimize the execution time and computational effort. The proposed algorithm terminates when the minimum value of the objective function, the final temperature, or the maximum number of iterations is reached [21].
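The Metropolis test (12) and the ASA cooling schedule (13) can be sketched as follows (illustrative Python; the quenching factor c = 1 is an assumed default, not a value stated in the paper):

```python
import math
import random

def metropolis_accept(delta_E, T):
    """Metropolis criterion (12): always accept improvements (dE < 0);
    accept worse moves with probability exp(-dE / T)."""
    return delta_E < 0 or random.random() < math.exp(-delta_E / T)

def asa_temperature(T0, k, D, c=1.0):
    """ASA-style cooling schedule (13): T_k = T0 * exp(-c * k**(1/D)),
    where D is the search-space dimension and c the quenching factor."""
    return T0 * math.exp(-c * k ** (1.0 / D))
```

Note that as T approaches zero the acceptance probability for worse moves vanishes, so the search gradually becomes a pure descent.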
The design of the 2D recursive filter starts by assuming a population of random solutions in a multidimensional search space. Here, each particle consists of 15 positional coordinates, represented by

X = [a₀₁, a₀₂, a₁₀, a₁₁, a₁₂, a₂₀, a₂₁, a₂₂, b₁, c₁, d₁, b₂, c₂, d₂, H₀].  (14)

Therefore, the 2D filter coefficients are represented as a vector X, selected in the interval [−3, 3]. Each entry of X is optimized based on the flowchart presented in Figure 2, and the proposed design flow is given below.
Step 1. Specify the desired filter specifications as given in (8). Assume the initial temperature T₀ and the minimum temperature level T_min; initialize p = 2, 4, or 8 and N₁ = N₂ = 50; and use (5) to calculate the frequency-response terms for ω₁ and ω₂ before the optimization starts.
Step 2. Initialize the swarm size, the minimum value of the objective function (min_E), a random population of coefficient vectors X, and the maximum number of evaluations.
Step 3. Set the iteration number k = 1, assign the best solution to gbest, and use (2) to compute the fitness of the best solution gbest.
Step 4 (calculate fitness). Compute fitness values for each particle in the swarm.
Step 5 (update pbest). The particle best position pbest is updated based on the Metropolis criterion (12) as follows: if the current fitness of the selected particle is less than the fitness of pbest, then the current position of the particle is accepted as the new pbest with probability 1; otherwise the current particle is accepted as pbest with probability P = exp(−ΔE/T), where ΔE is the difference between the current fitness and the fitness of pbest and T is the current temperature.
Step 6 (update gbest). Now assign the value of the best particle to gbest, and calculate the new velocities and positions of all particles by (9) and (10).
Step 7 (reduce temperature). Calculate the new temperature T_k as specified in the cooling schedule (13). If T_k < T_min, then terminate the search for the best solution; otherwise go to Step 8.
Step 8 (update velocities and positions). Calculate velocities using (9) and positions for each particle by (10).
Step 9. Update k = k + 1, and check whether the maximum number of evaluations has been exceeded or the minimum value of the cost function has been achieved by any particle. If yes, then assign that particle to gbest; otherwise go back to Step 4.
Step 10. Design the 2D filter based on the coefficients of gbest and compute the magnitude response.
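Putting Steps 1–10 together, the overall loop can be sketched as below. This is a hedged Python illustration of the flow only: the parameter values, bound handling, and fitness interface are placeholders, not the paper's exact implementation.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def sapso(fitness, dim=15, swarm=30, iters=200, lo=-3.0, hi=3.0,
          T0=1e4, Tmin=1e-6, w=0.729, c1=1.494, c2=1.494):
    """Sketch of the Step 1-10 flow: PSO updates (9)-(10), a Metropolis
    test (12) on pbest replacement, and ASA-style cooling (13)."""
    # Steps 2-3: random population in [-3, 3], evaluate, pick initial gbest
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    f = np.array([fitness(p) for p in x])
    pbest, pf = x.copy(), f.copy()
    g, gf = pbest[pf.argmin()].copy(), float(pf.min())
    T = T0
    for k in range(1, iters + 1):
        # Steps 6/8: velocity and position updates, equations (9)-(10)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep coefficients in range
        # Steps 4-5: fitness plus Metropolis test (12) on pbest replacement
        f = np.array([fitness(p) for p in x])
        dE = f - pf
        accept = (dE < 0) | (rng.random(swarm) < np.exp(-np.maximum(dE, 0.0) / T))
        pbest[accept], pf[accept] = x[accept], f[accept]
        if pf.min() < gf:                          # track the best solution seen
            gf = float(pf.min())
            g = pbest[pf.argmin()].copy()
        # Steps 7/9: ASA-style cooling (13) and stopping test
        T = T0 * math.exp(-float(k) ** (1.0 / dim))
        if T < Tmin:
            break
    return g, gf
```

For the filter-design problem, `fitness` would evaluate the error J of (2) for a candidate coefficient vector (14) and penalize violations of the stability constraints (3).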
5. Simulation Results and Discussion
To validate the performance of the proposed SA-PSO, we implemented the algorithm in MATLAB 2009 on a Genuine Intel(R) Core 2 Duo CPU E7300 @ 2.66 GHz with 2 GB RAM. All the associated algorithms were executed for 20 independent runs of 40,000 functional evaluations (FE) each. Table 2 shows the choice of the required parameters for the GA- and SA-PSO-based methods. For the proposed SA-PSO-based methods, the initial population size is chosen as 100; it was observed that further increasing the population size leaves the solution quality unchanged. In SA-PSO-TVIW, the inertia weight is decreased from 0.9 to 0.4 to obtain the best possible solutions, whereas in SA-PSO-RANDIW it is varied randomly between 0.5 and 1. The starting temperature for annealing is chosen to be very high so that the search can escape from local optima easily. The parameters of the cooling schedule are chosen to be very small to assure global convergence; otherwise the search can skip the true global solutions [20, 21].

Table 3 compares the experimental results of the proposed method with NN (2001), GA (2003), GENETICA (2006), TBIA (2008), QPSO-DGM (2010), ASA, and PSO-TVIW with respect to the best known solutions. The best results among the different methods are highlighted in bold, and it can easily be verified that the experimental results of the proposed methods, SA-PSO-TVIW and SA-PSO-RANDIW, are better than those of all other reported methods [1–6]. For example, the percentage improvement in J provided by the proposed SA-PSO-TVIW method over the earlier reported methods [1–4, 6] is 23.4%, 52%, 5.23%, 2.54%, and 23.32%, respectively.

Further, Table 4 lists the best (best) and worst (worst) solutions obtained by the different methods for different values of p, which clearly reveals that the best/worst solutions obtained by the proposed SA-PSO-based algorithm are the best over all runs. Furthermore, Table 5 presents the mean and variance (VAR) of the optimal solutions for different values of p. From Table 5, it can be observed that the proposed SA-PSO algorithm exhibits better performance with less computational time (less than half that of GA or ASA) and produces optimal solutions with much lower variance. Hence, it can be implemented more efficiently in the real-time design of 2D filters.


Figures 3(a)–3(f) show the calculated amplitude response of the 2D filter for p = 2. The best results obtained by SA-PSO-TVIW and SA-PSO-RANDIW are shown in Figures 3(a) and 3(b), respectively. For comparison, the results obtained by NN, GA, GENETICA, and TBIA are included in Figures 3(c)–3(f). A closer look at these figures reveals that the SA-PSO-based methods, SA-PSO-TVIW and SA-PSO-RANDIW, yield a better approximation to the desired response, and the stopband ripple is much smaller than with the other competitive methods [1–6]. Figures 4(a) and 4(b) show the magnitude response obtained using the SA-PSO-based methods for p = 4, whereas Figures 5(a) and 5(b) show the magnitude response for p = 8.
Figures 6(a) and 6(b) show the best convergence profiles of the SA-PSO-based methods for p = 2 and 4, respectively. Among the 20 independent executions of the four algorithms (the two existing algorithms, GA and PSO-RANDIW, and the two proposed algorithms, SA-PSO-TVIW and SA-PSO-RANDIW), the best-performing runs are plotted here. For the first few thousand FEs, PSO-RANDIW converges faster than GA, SA-PSO-RANDIW, and SA-PSO-TVIW. After a certain number of iterations, both PSO-RANDIW and GA exhibit premature convergence and settle at near-optimal solutions. After 30,000 FEs, the curves of the SA-PSO-based methods fall below those of both GA and PSO, because SA helps particles jump out of local optima from the beginning of the search. As a result, at the end of the search procedure, the proposed hybrid method provides the best candidate solution. Hence, the proposed hybrid algorithm produces better optimal solutions in the search process for 2D recursive filters.
6. Conclusions
In this paper, a novel hybrid evolutionary algorithm based on PSO and SA is proposed for finding the global best solution for second-order, two-dimensional recursive digital filters. The proposed hybrid method integrates the global search capability of PSO with SA to escape from local minima. The experimental results are compared with those of earlier reported algorithms, namely, NN, GA, GENETICA, and TBIA, in addition to different variants of PSO. The comparisons indicate that the SA-PSO-based methods exhibit better performance in all experiments and provide the best optimum solution during the search. The proposed method also produces the best solution with lower mean and variance. Hence, the proposed approach could be an alternative for implementing two-dimensional digital filters in real-time applications. Higher-order two-dimensional recursive filter design with guaranteed stability, and further reduction of the computational complexity, are left as future work.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] V. M. Mladenov and N. E. Mastorakis, "Design of two-dimensional recursive filters by using neural networks," IEEE Transactions on Neural Networks, vol. 12, no. 3, pp. 585–590, 2001.
[2] N. Mastorakis, I. F. Gonos, and M. N. S. Swamy, "Design of two-dimensional recursive filters using genetic algorithms," IEEE Transactions on Circuits and Systems, vol. 50, no. 5, pp. 634–639, 2003.
[3] I. F. Gonos, L. I. Virirakis, N. E. Mastorakis, and M. N. S. Swamy, "Evolutionary design of 2-dimensional recursive filters via the computer language GENETICA," IEEE Transactions on Circuits and Systems II, vol. 53, no. 4, pp. 254–258, 2006.
[4] J.-T. Tsai, W.-H. Ho, and J.-H. Chou, "Design of two-dimensional recursive filters by using Taguchi-based immune algorithm," IET Signal Processing, vol. 2, no. 2, pp. 110–117, 2008.
[5] D. T. Pham and E. Koç, "Design of a two-dimensional recursive filter using the bees algorithm," International Journal of Automation and Computing, vol. 7, no. 3, pp. 399–402, 2010.
[6] J. Sun, W. Fang, and W. Xu, "A quantum-behaved particle swarm optimization with diversity-guided mutation for the design of two-dimensional IIR digital filters," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 57, no. 2, pp. 141–145, 2010.
[7] F. Zhao, Q. Zhang, D. Yu, X. Chen, and Y. Yang, "A hybrid algorithm based on PSO and simulated annealing and its applications for partner selection in virtual enterprise," in Proceedings of the International Conference on Advances in Intelligent Computing, pp. 380–389, Springer, 2005.
[8] A. Jamili, M. A. Shafia, and R. Tavakkoli-Moghaddam, "A hybrid algorithm based on particle swarm optimization and simulated annealing for a periodic job shop scheduling problem," International Journal of Advanced Manufacturing Technology, vol. 54, no. 1–4, pp. 309–322, 2011.
[9] F. Zhao, Y. Hong, D. Yu, Y. Yang, Q. Zhang, and H. Yi, "A hybrid algorithm based on particle swarm optimization and simulated annealing to holon task allocation for holonic manufacturing system," International Journal of Advanced Manufacturing Technology, vol. 32, no. 9-10, pp. 1021–1032, 2007.
[10] L. Idoumghar, M. Melkemi, R. Schott, and M. I. Aouad, "Hybrid PSO-SA type algorithms for multimodal function optimization and reducing energy consumption in embedded systems," Applied Computational Intelligence and Soft Computing, vol. 2011, Article ID 138078, 12 pages, 2011.
[11] H.-L. Shieh, C.-C. Kuo, and C.-M. Chiang, "Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4365–4383, 2011.
[12] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, 1995.
[13] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 101–106, 1999.
[14] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[15] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing, vol. 11, no. 4, pp. 3658–3670, 2011.
[16] J. C. Bansal, P. K. Singh, M. Saraswat, A. Verma, S. S. Jadon, and A. Abraham, "Inertia weight strategies in particle swarm optimization," in Proceedings of the 3rd World Congress on Nature and Biologically Inspired Computing (NaBIC '11), pp. 633–640, Salamanca, Spain, October 2011.
[17] Y. Feng, G.-F. Teng, A.-X. Wang, and Y.-M. Yao, "Chaotic inertia weight in particle swarm optimization," in Proceedings of the 2nd International Conference on Innovative Computing, Information and Control (ICICIC '07), p. 475, Kumamoto, Japan, September 2007.
[18] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, 1983.
[19] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," The Journal of Chemical Physics, vol. 21, no. 6, pp. 1087–1092, 1953.
[20] L. Ingber, "Adaptive simulated annealing (ASA): lessons learned," Control and Cybernetics, vol. 25, no. 1, pp. 33–54, 1996.
[21] S. Dhabal and P. Venkateswaran, "An efficient nonuniform cosine modulated filter bank design using simulated annealing," Journal of Signal and Information Processing, vol. 3, no. 3, pp. 330–338, 2012.
Copyright
Copyright © 2014 Supriya Dhabal and Palaniandavar Venkateswaran. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.