Abstract

In this paper, an opposition-based harmony search (OHS) algorithm is applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search (HS) algorithm is chosen as the parent algorithm, and an opposition-based approach is applied to it. During initialization, a randomly generated population of solutions and its opposite population are both considered, and the fitter candidates are selected as the a priori guesses. In the harmony memory, each solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based generation jumping, which yields the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. The incorporation of different control parameters in the basic HS algorithm results in a balance between exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for a comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.

1. Introduction

A digital filter is essentially a system or network that improves the quality of a signal and/or extracts information from signals or separates two or more signals which have previously been combined. In this context, the terms linear time invariant (LTI) system and filter are often used synonymously, and such systems are often used to perform spectral shaping or frequency selective filtering. The nature of this filtering action is determined by the frequency response characteristics, which depend on the choice of system parameters, that is, the coefficients of the difference equation. Thus, by proper selection of the coefficients, one can design frequency selective filters that pass signals with frequency components in some bands while attenuating signals containing frequency components in other bands [1, 2]. There are different techniques for the design of FIR filters, such as the window method and the frequency sampling method. All these methods are based on approximating the frequency characteristics of an ideal filter. The design is based on the requirements on the ripples in the pass band and the stop band, the stop band attenuation, and the transition width. In the window method, the ideal impulse response is multiplied with a window function. There are various kinds of window functions (Butterworth, Chebyshev, Kaiser, etc.). These windows limit the infinite length impulse response of the ideal filter to a finite window in order to obtain an actual, realizable response [3–5]. The major drawback of windowing methods is that they do not allow sufficient control of the frequency response in the various frequency bands and of other filter parameters such as the transition width, and they tend to require relatively long filter lengths. The designer always has to compromise on one or the other design specification [6]. The conventional gradient-based optimization method [7] and other classical optimization algorithms [3, 4] are not sufficient to optimize the multimodal and nonuniform objective functions of FIR filters, and they may fail to converge to the global minimum solution. Hence, evolutionary methods have been employed in the design of optimal digital filters to obtain better control of the filter parameters and to achieve the highest stop band attenuation and the lowest stop band ripples.

Different evolutionary optimization techniques have been reported in the literature. When considering global optimization methods for digital filter design, GA [8–11] seems to have attracted considerable attention. Although the standard GA (also known as the real-coded GA (RGA)) shows good performance in finding promising regions of the search space, it is inefficient in determining the global optimum and is prone to revisiting the same suboptimal solutions. In order to overcome the problems associated with RGA, the orthogonal genetic algorithm (OGA) [12] and the hybrid Taguchi GA (TGA) [13] have been proposed. Tabu search [14], simulated annealing (SA) [15, 16], the bee colony algorithm (BCA) [17], differential evolution (DE) [18, 19], the seeker optimization algorithm [20], particle swarm optimization (PSO) [21–23], the opposition-based BAT (OBAT) algorithm [24], variants of PSO such as PSO with quantum infusion (PSO-QI) [25, 26], adaptive inertia weight PSO [27], chaotic mutation PSO (CMPSO) [28, 29], novel PSO (NPSO) [30], the gravitational search algorithm (GSA) [31], the seeker optimization algorithm (SOA) [32], and hybrid algorithms such as DE-PSO [33] have also been used for filter design problems with varying degrees of comparative optimization effectiveness.

Most of the above algorithms suffer from the problems of fixing the algorithms' control parameters, premature convergence, stagnation, and revisiting of the same solutions over and over again. In order to overcome these problems, in this paper, a novel optimization algorithm called opposition-based harmony search (OHS) is employed for the FIR filter design.

Tizhoosh introduced the concept of opposition-based learning (OBL) in [34]. This notion has been applied to accelerate reinforcement learning [35–37] and back propagation learning in neural networks [38]. The main idea behind OBL is the simultaneous consideration of an estimate and its corresponding opposite estimate (i.e., a guess and its opposite guess) in order to achieve a better approximation of the current candidate solution. In the recent literature, the concept of opposite numbers has been utilized to speed up the convergence rate of optimization algorithms, for example, in opposition-based differential evolution (ODE) [39]. Can this idea of opposite numbers be incorporated during harmony memory (HM) initialization and also while generating new harmony vectors during the HS process? In this paper, OBL has been utilized to accelerate the convergence rate of HS; hence, the proposed approach is called opposition-based HS (OHS). OHS uses opposite numbers during HM initialization and also for generating the new HM during the evolutionary process of HS.

This paper describes the comparative optimal designs of linear phase low pass (LP), high pass (HP), band pass (BP), and band stop (BS) FIR digital filters using other aforementioned algorithms and the proposed OHS approach individually. The OHS does not prematurely restrict the searching space. A comparison of optimal designs reveals better optimization efficacy of the proposed algorithm over the other optimization techniques for the solution of the multimodal, nondifferentiable, highly nonlinear, and constrained FIR filter design problems.

The rest of the paper is arranged as follows. In Section 2, the FIR filter design problem is formulated. Section 3 briefly discusses the evolutionary approaches employed for the FIR filter designs. Section 4 describes the simulation results obtained by employing PM, RGA, PSO, DE, and OHS. Finally, Section 5 concludes the paper.

2. FIR Filter Design

Digital filters are classified as finite impulse response (FIR) or infinite impulse response (IIR) filter depending upon whether the response of the filter is dependent on only the present and past inputs or on the present and past inputs as well as previous outputs, respectively.

An FIR filter has a system function of the form

$$H(z)=\sum_{n=0}^{N}h(n)z^{-n}$$

or, equivalently,

$$H(z)=h(0)+h(1)z^{-1}+\cdots+h(N)z^{-N},$$

where $h(n)$ is called the impulse response.

The difference equation representation is

$$y(n)=\sum_{k=0}^{N}h(k)\,x(n-k).$$

The order of the filter is $N$, while the length of the filter (which is equal to the number of coefficients) is $N+1$. FIR filter structures are always stable and can be designed to have a linear phase response. The impulse response values $h(n)$ are to be determined in the design process, and they determine the type of the filter, for example, LP, HP, BP, BS, and so forth. The choice of filter is based on four broad criteria: the filter should introduce as little distortion as possible to the signal, provide a flat pass band, exhibit high stop band attenuation, and have stop band ripples as low as possible.
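To make the relations above concrete, the short Python sketch below evaluates an example FIR filter both through its difference equation (direct convolution) and through its transfer function sampled on the unit circle; the coefficient values and the input signal are arbitrary placeholders, not filters designed in this paper.

```python
import numpy as np

# Arbitrary example impulse response h(0..N) of an order-N FIR filter (N = 4 here).
h = np.array([0.1, 0.25, 0.3, 0.25, 0.1])
x = np.random.randn(64)                      # an arbitrary input signal

# Difference equation y(n) = sum_k h(k) x(n - k), i.e., direct convolution.
y = np.convolve(x, h)[:len(x)]

# Transfer function H(z) = sum_n h(n) z^{-n}, evaluated on the unit circle z = e^{jw}.
w = np.linspace(0, np.pi, 128)
z = np.exp(1j * w)
H = np.array([np.sum(h * z_k ** (-np.arange(len(h)))) for z_k in z])

print(y[:5], np.abs(H[:3]))
```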

Other desirable characteristics include a short filter length, a narrow frequency transition beyond the cut-off point, and the ability to control the attenuation in the stop band. In many filtering applications, the −3 dB frequency has become a recognizable parameter for defining the cut-off frequency (the frequency at which the magnitude response falls to $1/\sqrt{2}$ of its pass band value, i.e., half power). A consequence of using the −3 dB measure is that it varies with filter length, since the sharpness of the transition is a function of the filter order. Additionally, as the filter order increases, the transition width decreases and the response approaches the ideal characteristic asymptotically [40, 41]. In any filter design problem, some of these parameters are fixed while others are optimized.

In this paper, OHS is applied in order to obtain an actual filter response as close as possible to the ideal filter response. The designed FIR filter, represented by each individual (particle/solution) of the population, is positive even symmetric and of even order $N$; the length of $h(n)$ is $N+1$, that is, the number of coefficients is also $N+1$. In each iteration, these solutions are updated. Fitness values of the updated solutions are calculated using the new coefficients and the error fitness function. The solution obtained after a certain number of iterations, or after the error fitness falls below a certain limit, is considered to be the optimal result. An error function, based on the difference between the magnitudes of the frequency responses of the ideal and the actual filters, is used to evaluate the fitness of each solution. An ideal filter has a magnitude of one in the pass band and a magnitude of zero in the stop band. The error fitness function is minimized using the evolutionary algorithms RGA, PSO, DE, and OHS, individually. The individuals that have lower error fitness values represent better filters, that is, filters with better frequency responses.

The frequency response of the FIR digital filter can be calculated as

$$H_d(e^{j\omega_k})=\sum_{n=0}^{N}h(n)e^{-j\omega_k n},\qquad \omega_k=\frac{2\pi k}{K},$$

where $H_d(e^{j\omega_k})$ is the Fourier transform complex vector. The frequency $\omega$ is sampled with $K$ points in $[0,\pi]$. The error between the ideal and the actual responses is then

$$E(\omega_k)=H_i(\omega_k)-\bigl|H_d(e^{j\omega_k})\bigr|,\qquad k=1,2,\ldots,K,$$

where $H_i(\omega)$ represents the magnitude response of the ideal filter; for the LP, HP, BP, and BS filters it is given, respectively, as

$$H_i(\omega)=\begin{cases}1, & 0\le\omega\le\omega_c,\\ 0, & \text{otherwise,}\end{cases}\qquad
H_i(\omega)=\begin{cases}0, & 0\le\omega<\omega_c,\\ 1, & \omega_c\le\omega\le\pi,\end{cases}$$

$$H_i(\omega)=\begin{cases}1, & \omega_{l}\le\omega\le\omega_{h},\\ 0, & \text{otherwise,}\end{cases}\qquad
H_i(\omega)=\begin{cases}0, & \omega_{l}\le\omega\le\omega_{h},\\ 1, & \text{otherwise,}\end{cases}$$

with $\omega_c$ the cut-off frequency and $\omega_{l}$, $\omega_{h}$ the lower and upper band edge frequencies. $H_d(e^{j\omega})$ represents the approximate actual filter to be designed, and $K$ is the number of frequency samples.
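The following Python sketch shows how these quantities can be evaluated numerically; it is an illustration only, and the helper names, the cut-off value, and the placeholder coefficients are assumptions rather than the actual design data of this paper.

```python
import numpy as np

def freq_response(h, num_samples=128):
    """Magnitude of H_d(e^{jw}) sampled at num_samples points in [0, pi]."""
    w = np.linspace(0, np.pi, num_samples)
    n = np.arange(len(h))
    Hd = np.exp(-1j * np.outer(w, n)) @ h      # H_d(e^{jw_k}) = sum_n h(n) e^{-j w_k n}
    return w, np.abs(Hd)

def ideal_lowpass(w, wc):
    """Ideal LP magnitude response: one in the pass band, zero in the stop band."""
    return (w <= wc).astype(float)

# Error vector E(w_k) for an arbitrary coefficient set and an assumed cut-off of 0.5*pi.
h = np.ones(21) / 21.0                         # placeholder coefficients, not a designed filter
w, Hd_mag = freq_response(h)
E = ideal_lowpass(w, 0.5 * np.pi) - Hd_mag     # E(w) = H_i(w) - |H_d(e^{jw})|
print(np.max(np.abs(E)))
```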

Different kinds of error fitness functions, built from the error between the ideal and the actual magnitude responses over the sampled frequency grid, have been used in the literature [18, 19, 21]. The approximate error function used in the popular Parks-McClellan (PM) algorithm for digital filter design [3] is given by

$$E(\omega)=W(\omega)\bigl[H_d(e^{j\omega})-H_i(\omega)\bigr],$$

where $W(\omega)$ is the weighting function used to provide different weights for the approximation errors in different frequency bands.

The major drawback of the PM algorithm is that the ratio $\delta_p/\delta_s$ is fixed. In order to improve the flexibility in the error function to be minimized, so that the desired levels of $\delta_p$ and $\delta_s$ may be specified individually, the error function

$$J_1=\max_{\omega\le\omega_p}\bigl(|E(\omega)|-\delta_p\bigr)+\max_{\omega\ge\omega_s}\bigl(|E(\omega)|-\delta_s\bigr)$$

has been considered as the fitness function in [23, 26], where $\delta_p$ and $\delta_s$ are the ripples in the pass band and the stop band, respectively, and $\omega_p$ and $\omega_s$ are the pass band and stop band normalized edge frequencies, respectively.
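A minimal numerical sketch of this flexible error measure is given below, reusing the sampled responses from the previous snippet; the band edges and ripple values shown are placeholders, not the specifications used later in the paper.

```python
import numpy as np

def flexible_error(w, Hd_mag, Hi_mag, wp, ws, delta_p, delta_s):
    """J1 = max over pass band (|E| - delta_p) + max over stop band (|E| - delta_s)."""
    E = np.abs(Hi_mag - Hd_mag)
    pass_band = w <= wp
    stop_band = w >= ws
    return np.max(E[pass_band] - delta_p) + np.max(E[stop_band] - delta_s)

# Example call with placeholder specifications (LP filter, frequencies normalized to pi):
# j1 = flexible_error(w, Hd_mag, ideal_lowpass(w, 0.5 * np.pi),
#                     wp=0.45 * np.pi, ws=0.55 * np.pi, delta_p=0.1, delta_s=0.01)
```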

In this paper, a novel error fitness function has been adopted in order to achieve higher stop band attenuation and to have moderate control over the transition width. The error fitness function used in this paper is

$$J=\sum_{\omega\in\mathrm{pb}}\bigl|\,|E(\omega)|-\delta_p\bigr|+\sum_{\omega\in\mathrm{sb}}\bigl(|E(\omega)|-\delta_s\bigr).\tag{11}$$

Using (11), it is found that the proposed filter design approach results in considerable improvement over the PM algorithm and the other optimization techniques.

In the first term of (11), the summation is taken over the pass band including a portion of the transition band, and in the second term of (11), the summation is taken over the stop band including the rest of the transition band. The portions of the transition band included depend on the pass band edge and stop band edge frequencies.

The error fitness function given in (11) represents the generalized fitness function to be minimized by each of the evolutionary algorithms individually. Each algorithm tries to minimize this error and thus improves the filter performance. Since the coefficients of a linear phase FIR filter are symmetric, the dimension of the problem is halved; by determining only half of the coefficients, the FIR filter can be designed. This greatly reduces the computational burden of the algorithms applied to the design of linear phase FIR filters.
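Assuming the two-term structure of (11) described above (a pass band term extended into part of the transition band plus a stop band term over the rest), the following sketch shows how the half-length coefficient vector of an even-order, even-symmetric filter can be expanded and scored; all masks, band edges, and ripple values are illustrative assumptions.

```python
import numpy as np

def expand_symmetric(half):
    """Build the full even-symmetric impulse response h(0..N), N even,
    from its first N/2 + 1 coefficients (h(n) = h(N - n))."""
    return np.concatenate([half, half[-2::-1]])

def fitness(half, w, Hi_mag, pb_mask, sb_mask, delta_p, delta_s):
    """Error fitness in the spirit of (11): sum of | |E| - delta_p | over the pass band
    (plus part of the transition band) and of (|E| - delta_s) over the stop band
    (plus the rest of the transition band)."""
    h = expand_symmetric(np.asarray(half, float))
    n = np.arange(len(h))
    Hd_mag = np.abs(np.exp(-1j * np.outer(w, n)) @ h)
    E = np.abs(Hi_mag - Hd_mag)
    return np.sum(np.abs(E[pb_mask] - delta_p)) + np.sum(E[sb_mask] - delta_s)

# Placeholder usage for a 20th-order LP filter (11 free coefficients).
w = np.linspace(0, np.pi, 128)
Hi = (w <= 0.5 * np.pi).astype(float)   # assumed ideal LP response
pb = w <= 0.5 * np.pi                   # pass band plus lower part of the transition band (assumed)
sb = w > 0.5 * np.pi                    # stop band plus the rest of the transition band (assumed)
print(fitness(np.random.rand(11), w, Hi, pb, sb, delta_p=0.1, delta_s=0.01))
```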

3. Optimization Techniques Employed

Evolutionary algorithms share some common characteristics: they are stochastic, adaptive, and learning-based, which makes them intelligent optimization schemes. Such schemes have the potential to adapt to an ever-changing dynamic environment through previously acquired knowledge. Tizhoosh introduced the concept of opposition-based learning (OBL) in [34]. In this paper, OBL is utilized to accelerate the convergence rate of HS; hence, the proposed approach is called opposition-based harmony search (OHS). OHS uses opposite numbers during HM initialization and also while generating the new harmony memory (HM) during the evolutionary process of HS. The other algorithms considered in this paper, RGA, PSO, and DE, are well known and are not discussed here.

3.1. A Brief Description of HS Algorithm

In the basic HS algorithm, each solution is called a harmony. It is represented by an $N$-dimensional real vector. An initial population of randomly generated harmony vectors is stored in the harmony memory (HM). Then, a new candidate harmony is generated from all the solutions in the HM by adopting a memory consideration rule, a pitch adjustment rule, and a random reinitialization. Finally, the HM is updated by comparing the new candidate harmony vector and the worst harmony vector in the HM; the worst harmony vector is replaced by the new candidate vector if the latter is better. The above process is repeated until a certain termination criterion is met. Thus, the basic HS algorithm consists of three basic phases: initialization, improvisation of a harmony vector, and updating of the HM. These phases are described below in sequence.

3.1.1. Initialization of the Problem and the Parameters of the HS Algorithm

In general, a global optimization problem can be stated as follows: minimize $f(X)$ subject to $x_j\in[\mathrm{LB}_j,\mathrm{UB}_j]$, $j=1,2,\ldots,N$, where $f(X)$ is the objective function, $X=(x_1,x_2,\ldots,x_N)$ is the set of design variables, and $N$ is the number of design variables. Here, $\mathrm{LB}_j$ and $\mathrm{UB}_j$ are the lower and upper bounds for the design variable $x_j$, respectively. The parameters of the HS algorithm are the harmony memory size (HMS) (the number of solution vectors in the HM), the harmony memory consideration rate (HMCR), the pitch adjusting rate (PAR), the distance bandwidth (BW), and the number of improvisations (NI); NI equals the total number of fitness function calls (NFFCs) and may be set as a stopping criterion.
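As a small configuration sketch only, the structures below group these quantities; the names and default values are illustrative assumptions, not the settings reported in Table 1.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class HSParams:
    hms: int = 30        # harmony memory size (number of solution vectors in the HM)
    hmcr: float = 0.9    # harmony memory consideration rate
    par: float = 0.3     # pitch adjusting rate
    bw: float = 0.01     # distance bandwidth
    ni: int = 1000       # number of improvisations (= number of fitness function calls)

@dataclass
class Problem:
    lower: Sequence[float]                        # LB_j for each design variable
    upper: Sequence[float]                        # UB_j for each design variable
    objective: Callable[[Sequence[float]], float] # f(X) to be minimized
```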

3.1.2. Initialization of the HM

The HM consists of HMS harmony vectors. Let $X_i=(x_i^1,x_i^2,\ldots,x_i^N)$ represent the $i$th harmony vector, which is randomly generated within the parameter limits $[\mathrm{LB}_j,\mathrm{UB}_j]$. Then the HM matrix is filled with the HMS harmony vectors as follows:

$$\mathrm{HM}=\begin{bmatrix}X_1\\ X_2\\ \vdots\\ X_{\mathrm{HMS}}\end{bmatrix}=\begin{bmatrix}x_1^1 & x_1^2 & \cdots & x_1^N\\ x_2^1 & x_2^2 & \cdots & x_2^N\\ \vdots & \vdots & & \vdots\\ x_{\mathrm{HMS}}^1 & x_{\mathrm{HMS}}^2 & \cdots & x_{\mathrm{HMS}}^N\end{bmatrix}.$$
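A minimal sketch of this random initialization, assuming NumPy and the bound vectors from the problem definition, is given below.

```python
import numpy as np

def init_harmony_memory(hms, lower, upper, rng=np.random.default_rng()):
    """Fill the HM with HMS harmony vectors drawn uniformly within [LB_j, UB_j]."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + rng.random((hms, len(lower))) * (upper - lower)

# Example: 30 harmony vectors of 11 design variables, each in [-1, 1].
hm = init_harmony_memory(30, [-1.0] * 11, [1.0] * 11)
```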

3.1.3. Improvisation of a New Harmony

A new harmony vector $X_{\text{new}}=(x_{\text{new}}^1,x_{\text{new}}^2,\ldots,x_{\text{new}}^N)$ is generated (called improvisation) by applying three rules, namely, (i) memory consideration, (ii) pitch adjustment, and (iii) random selection. First of all, a uniform random number $r_1$ is generated in the range $[0,1]$. If $r_1$ is less than HMCR, the decision variable $x_{\text{new}}^j$ is generated by the memory consideration; otherwise, $x_{\text{new}}^j$ is obtained by a random selection (i.e., random reinitialization between the search bounds). In the memory consideration, $x_{\text{new}}^j$ is selected from any harmony vector $X_i$, $i\in\{1,2,\ldots,\mathrm{HMS}\}$, in the HM. Secondly, each decision variable $x_{\text{new}}^j$ undergoes a pitch adjustment with a probability of PAR if it has been updated by the memory consideration. The pitch adjustment rule is given as

$$x_{\text{new}}^j = x_{\text{new}}^j \pm r\times\mathrm{BW},$$

where $r$ is a uniform random number between 0 and 1.

3.1.4. Updating of HM

After a new harmony vector $X_{\text{new}}$ is generated, the HM is updated by the survival of the fitter vector between $X_{\text{new}}$ and the worst harmony vector $X_{\text{worst}}$ in the HM. That is, $X_{\text{new}}$ replaces $X_{\text{worst}}$ and becomes a new member of the HM if the fitness value of $X_{\text{new}}$ is better than the fitness value of $X_{\text{worst}}$.

The computational procedure of the basic HS algorithm can be summarized as shown in Algorithm 1.

Step 1. Set the parameters HMS, HMCR, PAR, BW, NI, and the number of decision variables N.
Step 2. Initialize the HM and calculate the objective function value for each harmony vector.
Step 3. Improvise a new harmony vector X_new as follows
     (r1, r2, r3, r4 are uniform random numbers in [0, 1]):
     for (j = 1 to N) do
      if (r1 < HMCR) then
       x_new^j = x_i^j, i in {1, 2, ..., HMS}    // memory consideration
       if (r2 < PAR) then
        x_new^j = x_new^j ± r3 × BW              // pitch adjustment
       end if
      else
       x_new^j = LB_j + r4 × (UB_j - LB_j)       // random selection
      end if
     end for
Step 4. Update the HM as X_worst = X_new if f(X_new) < f(X_worst).
Step 5. If NI is completed, return the best harmony vector X_best in the HM; otherwise go back to Step 3.
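A compact Python sketch of Steps 1–5 follows; it is an illustrative implementation of the basic HS loop described above, not the authors' code, and the test objective and parameter values are placeholders.

```python
import numpy as np

def harmony_search(f, lower, upper, hms=30, hmcr=0.9, par=0.3, bw=0.01, ni=5000,
                   rng=np.random.default_rng(0)):
    """Basic HS: memory consideration, pitch adjustment, random selection, HM update."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    hm = lower + rng.random((hms, n)) * (upper - lower)          # Step 2: initialize the HM
    fit = np.array([f(x) for x in hm])
    for _ in range(ni):                                          # Step 3: improvisation
        new = np.empty(n)
        for j in range(n):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hms), j]                # memory consideration
                if rng.random() < par:
                    new[j] += (2 * rng.random() - 1) * bw        # pitch adjustment (± r·BW)
                    new[j] = np.clip(new[j], lower[j], upper[j])
            else:
                new[j] = lower[j] + rng.random() * (upper[j] - lower[j])  # random selection
        worst = np.argmax(fit)                                   # Step 4: replace the worst
        f_new = f(new)
        if f_new < fit[worst]:
            hm[worst], fit[worst] = new, f_new
    return hm[np.argmin(fit)], fit.min()                         # Step 5: best harmony

# Placeholder objective (sphere function), only to exercise the sketch.
best, best_val = harmony_search(lambda x: float(np.sum(x ** 2)), [-5] * 4, [5] * 4)
```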

3.2. The Improved Harmony Search (IHS) Algorithm

The basic HS algorithm uses fixed values for the PAR and BW parameters. The IHS algorithm, proposed by Mahdavi et al. [42], applies the same memory consideration, pitch adjustment, and random selection as the basic HS algorithm but dynamically updates the values of PAR and BW as in (14) and (15), respectively:

$$\mathrm{PAR}(gn)=\mathrm{PAR}_{\min}+\frac{\mathrm{PAR}_{\max}-\mathrm{PAR}_{\min}}{\mathrm{NI}}\times gn,\tag{14}$$

$$\mathrm{BW}(gn)=\mathrm{BW}_{\max}\exp(c\cdot gn),\qquad c=\frac{\ln\bigl(\mathrm{BW}_{\min}/\mathrm{BW}_{\max}\bigr)}{\mathrm{NI}}.\tag{15}$$

In (14), $\mathrm{PAR}(gn)$ is the pitch adjustment rate in the current generation $gn$; $\mathrm{PAR}_{\min}$ and $\mathrm{PAR}_{\max}$ are the minimum and the maximum adjustment rates, respectively. In (15), $\mathrm{BW}(gn)$ is the distance bandwidth at generation $gn$; $\mathrm{BW}_{\min}$ and $\mathrm{BW}_{\max}$ are the minimum and the maximum bandwidths, respectively.
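A small sketch of these schedules, written under the assumption that (14) and (15) have the linear and exponential forms reconstructed above, is shown below.

```python
import math

def par_at(gn, par_min, par_max, ni):
    """Linearly increasing pitch adjusting rate, as in (14)."""
    return par_min + (par_max - par_min) / ni * gn

def bw_at(gn, bw_min, bw_max, ni):
    """Exponentially decreasing distance bandwidth, as in (15)."""
    c = math.log(bw_min / bw_max) / ni
    return bw_max * math.exp(c * gn)

# Example: schedules at the midpoint of NI = 1000 improvisations (placeholder limits).
print(par_at(500, 0.2, 0.9, 1000), bw_at(500, 1e-4, 1.0, 1000))
```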

3.3. Opposition-Based Learning: A Concept

Evolutionary optimization methods start with some initial solutions (an initial population) and try to improve them toward some optimal solution(s). The process of searching terminates when some predefined criteria are satisfied. In the absence of a priori information about the solution, we usually start with random guesses. The computation time, among other factors, is related to the distance of these initial guesses from the optimal solution. We can improve the chance of starting with a closer (fitter) solution by simultaneously checking the opposite solution [34]. By doing this, the fitter one (guess or opposite guess) can be chosen as an initial solution. In fact, according to probability theory, 50% of the time a guess is further from the solution than its opposite guess [36]. Therefore, starting with the closer of the two guesses (as judged by its fitness) has the potential to accelerate convergence. The same approach can be applied not only to the initial solutions but also continuously to each solution in the current population [36].

3.3.1. Definition of Opposite Number

Let $x\in[\mathrm{LB},\mathrm{UB}]$ be a real number. The opposite number $\breve{x}$ is defined as

$$\breve{x}=\mathrm{LB}+\mathrm{UB}-x.$$

Similarly, this definition can be extended to higher dimensions [34] as stated in the next subsection.

3.3.2. Definition of Opposite Point

Let $X=(x_1,x_2,\ldots,x_N)$ be a point in $N$-dimensional space, where $x_j\in[\mathrm{LB}_j,\mathrm{UB}_j]$ for all $j\in\{1,2,\ldots,N\}$. The opposite point $\breve{X}=(\breve{x}_1,\breve{x}_2,\ldots,\breve{x}_N)$ is completely defined by its components as

$$\breve{x}_j=\mathrm{LB}_j+\mathrm{UB}_j-x_j.$$

Now, by employing the opposite point definition, the opposition-based optimization is defined in the following subsection.

3.3.3. Opposition-Based Optimization

Let $X=(x_1,x_2,\ldots,x_N)$ be a point in $N$-dimensional space (i.e., a candidate solution). Assume $f(\cdot)$ is the fitness function used to measure the candidate's fitness. According to the definition of the opposite point, $\breve{X}$ is the opposite of $X$. Now, if the fitness of $\breve{X}$ is better than that of $X$, then $X$ is replaced with $\breve{X}$; otherwise, we continue with $X$. Hence, the point and its opposite point are evaluated simultaneously in order to continue with the fitter one.
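The opposition-based selection described in Sections 3.3.1–3.3.3 can be sketched for a whole population as follows (a NumPy illustration under the assumption of a minimization problem; the bounds, population size, and test objective are placeholders).

```python
import numpy as np

def opposite(pop, lower, upper):
    """Component-wise opposite points: x_j -> LB_j + UB_j - x_j."""
    return lower + upper - pop

def keep_fitter(pop, lower, upper, f):
    """Evaluate each point and its opposite; keep the fitter of the two (minimization)."""
    opp = opposite(pop, lower, upper)
    fit_pop = np.apply_along_axis(f, 1, pop)
    fit_opp = np.apply_along_axis(f, 1, opp)
    take_opp = fit_opp < fit_pop
    return np.where(take_opp[:, None], opp, pop)

# Placeholder usage on a random population of 30 four-dimensional points.
lower, upper = np.full(4, -5.0), np.full(4, 5.0)
pop = lower + np.random.rand(30, 4) * (upper - lower)
pop = keep_fitter(pop, lower, upper, lambda x: float(np.sum(x ** 2)))
```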

3.4. Opposition-Based Harmony Search (OHS) Algorithm

Similar to all population-based optimization algorithms, two main steps are distinguishable in HS, namely, HM initialization and the production of a new HM by the improvisation principle of HS. In the present work, the strategy of OBL [34] is incorporated in these two steps. The original HS is chosen as the parent algorithm, and opposition-based ideas are embedded in it with the intention of exhibiting an accelerated convergence profile. The corresponding pseudocode for the proposed OHS approach is summarized in Algorithm 2.

Step 1. Set the parameters HMS, HMCR, PAR_min, PAR_max, BW_min, BW_max, NI, Jr, and N.
Step 2. Initialize the HM with HMS randomly generated harmony vectors X0_i, i = 1, 2, ..., HMS.
Step 3. Opposition-based HM initialization:
    for (i = 1 to HMS) do
     for (j = 1 to N) do
      OX0_{i,j} = LB_j + UB_j - X0_{i,j}    // OX0: opposite of the initial X0
     end for
    end for
    // End of opposition-based HM initialization.
    Select the HMS fittest individuals from the set {X0 ∪ OX0} as the initial HM; HM being
    the matrix of the fittest X vectors.
Step 4. Improvise a new harmony X_new as follows
    (r1, r2, r3, r4 are uniform random numbers in [0, 1]):
    Update PAR(gn) by (14) and BW(gn) by (15).
    for (i = 1 to HMS) do
     for (j = 1 to N) do
      if (r1 < HMCR) then
       x_new^j = x_i^j                       // memory consideration
       if (r2 < PAR(gn)) then
        x_new^j = x_new^j ± r3 × BW(gn)      // pitch adjustment
       end if
      else
       x_new^j = LB_j + r4 × (UB_j - LB_j)   // random selection
      end if
     end for
    end for
Step 5. Update the HM as X_worst = X_new if f(X_new) < f(X_worst).
Step 6. Opposition-based generation jumping:
    if (r5 < Jr) then              // r5: uniform random number in [0, 1]; Jr: jumping rate
     for (i = 1 to HMS) do
      for (j = 1 to N) do
       OX_{i,j} = MIN_j^{gn} + MAX_j^{gn} - X_{i,j}
       // MIN_j^{gn}: minimum value of the jth variable in the current generation (gn)
       // MAX_j^{gn}: maximum value of the jth variable in the current generation (gn)
      end for
     end for
     Select the HMS fittest harmony vectors from the set {HM ∪ OHM} as the current HM.
    end if
    // End of opposition-based generation jumping.
Step 7. If NI is completed, return the best harmony vector X_best in the HM; otherwise go back to Step 4.
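The two opposition-based steps that distinguish OHS from the basic HS, Step 3 (opposition-based HM initialization) and Step 6 (opposition-based generation jumping), can be sketched as follows; this is an illustrative Python fragment, and the jumping rate value and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def opposition_init(hms, lower, upper, f):
    """Step 3: evaluate a random population and its opposite; keep the HMS fittest."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x0 = lower + rng.random((hms, len(lower))) * (upper - lower)
    ox0 = lower + upper - x0
    pool = np.vstack([x0, ox0])
    fit = np.array([f(p) for p in pool])
    best = np.argsort(fit)[:hms]
    return pool[best], fit[best]

def generation_jumping(hm, fit, f, jumping_rate=0.3):
    """Step 6: with probability Jr, reflect the HM against the current min/max of
    each variable and keep the HMS fittest of {HM, opposite HM}."""
    if rng.random() >= jumping_rate:
        return hm, fit
    o_hm = hm.min(axis=0) + hm.max(axis=0) - hm      # opposite within current variable ranges
    pool = np.vstack([hm, o_hm])
    pool_fit = np.concatenate([fit, [f(p) for p in o_hm]])
    best = np.argsort(pool_fit)[:len(hm)]
    return pool[best], pool_fit[best]
```

In a full OHS run, these two routines would wrap the basic improvisation loop sketched in Section 3.1, with the jumping step applied once per generation.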

4. Results and Discussions

This section presents the simulations performed in MATLAB 7.5 for the design of the LP, HP, BP, and BS FIR filters. Each filter order $N$ is taken as 20, which results in 21 coefficients. The sampling frequency is taken to be 1 Hz, and the number of frequency samples is 128. Each algorithm is run 50 times to obtain its best results. Table 1 shows the best chosen control parameters for RGA, PSO, DE, and OHS, respectively.

The parameters of the filters to be designed using any algorithm are as follows: pass band ripple $\delta_p$ and stop band ripple $\delta_s$. For the LP filter: pass band (normalized) edge frequency $\omega_p$; stop band (normalized) edge frequency $\omega_s$; transition width = 0.1. For the HP filter: stop band (normalized) edge frequency $\omega_s$; pass band (normalized) edge frequency $\omega_p$; transition width = 0.1. For the BP filter: lower stop band (normalized) edge frequency $\omega_{sl}$; lower pass band (normalized) edge frequency $\omega_{pl}$; upper pass band (normalized) edge frequency $\omega_{ph}$; upper stop band (normalized) edge frequency $\omega_{sh}$; transition width = 0.1. For the BS filter: lower pass band (normalized) edge frequency $\omega_{pl}$; lower stop band (normalized) edge frequency $\omega_{sl}$; upper stop band (normalized) edge frequency $\omega_{sh}$; upper pass band (normalized) edge frequency $\omega_{ph}$; transition width = 0.1. Tables 2, 3, 4, and 5 show the optimized filter coefficients obtained for the LP, HP, BP, and BS FIR filters, respectively, using RGA, PSO, DE, and OHS, individually.

Table 6 shows the highest stop band attenuations obtained for all four types of filters using OHS, namely, 35.16 dB (LP filter), 33.86 dB (HP filter), 34.76 dB (BP filter), and 32.45 dB (BS filter), as compared to those of PM, RGA, PSO, and DE. Tables 7, 8, 9, and 10 show the comparative results of the performance parameters, in terms of maximum and average stop band ripple (normalized) and transition width (normalized), for the LP, HP, BP, and BS filters using PM, RGA, PSO, DE, and OHS, respectively. It is also noticed that, for almost the same level of transition width and stop band ripple, OHS results in the best stop band attenuation among all algorithms for all types of filters. Tables 11, 12, 13, and 14 summarize the maximum, mean, variance, and standard deviation of the pass band ripple (normalized) and the stop band attenuation in dB for the LP, HP, BP, and BS filters using all concerned algorithms.

In Table 15, the OHS-based results are compared with other reported results. Oliveira et al. [15] have designed a 30th-order BP filter with stop band attenuation and transition width of 33 dB and 0.1, respectively. A 20th-order LP filter has been designed by Karaboga and Cetinkaya [18] with transition width, pass band, and stop band ripples of 0.16, 0.08, and 0.09, respectively. Liu et al. [19] also reported a 20th-order FIR filter with transition width, pass band, and stop band ripples of 0.06, 0.04, and 0.07, respectively. Najjarzadeh and Ayatollahi [21] have designed LP and BP filters of order 33 with approximate stop band attenuations of 29 dB and 25 dB, respectively. A 30th-order FIR filter has been designed by Ababneh and Bataineh [23] with stop band attenuation, transition width, pass band, and stop band ripples of 30 dB, 0.05, 0.15, and 0.031, respectively. Sarangi et al. [26] have designed 20th-order FIR filters; for the LP filter the stop band attenuation, transition width, pass band, and stop band ripples are 27 dB, 0.15, 0.1, and 0.06, respectively, and for the BP filter of the same order these values are 8 dB, 0.07, 0.2, and 0.05, respectively. Mondal et al. have reported a 20th-order HP filter [30] with stop band attenuation, transition width, pass band, and stop band ripples of 34.03 dB, 0.0825, 0.129, and 0.02392, respectively. Luitel and Venayagamoorthy have also designed a 20th-order LP filter with stop band attenuation, transition width, pass band, and stop band ripples of 27 dB, 0.13, 0.291, and 0.270, respectively, as reported in [33].

In this paper, the OHS-based design is applied to 20th-order LP, HP, BP, and BS filters. Maximum stop band attenuations of 35.16 dB, 33.86 dB, 34.76 dB, and 32.45 dB; maximum pass band ripples of 0.140, 0.140, 0.153, and 0.140; maximum stop band ripples of 0.01746, 0.02027, 0.01828, and 0.02385; and transition widths of 0.0994, 0.1004, 0.0988, and 0.1069 are achieved for the LP, HP, BP, and BS filters, respectively. These values can be verified from the results presented in Table 15. Thus, it is observed from Table 15 that the stop band attenuations in all cases for the 20th-order filters using OHS are much better than the other reported results.

Figures 1–4 show the magnitude responses of the 20th-order LP filter in various forms using PM, RGA, PSO, DE, and OHS. The magnitude responses in dB are plotted in Figure 1. The normalized magnitude responses are shown in Figure 2. Figure 3 shows the normalized pass band ripple plots, and Figure 4 shows the normalized stop band ripple plots. Figures 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, and 16 show the magnitude responses in dB, the normalized magnitude responses, the normalized pass band ripples, and the normalized stop band ripples for the HP, BP, and BS filters, respectively. From the above figures and tables, it is observed that OHS results in better magnitude responses (dB), better normalized magnitude responses, and better normalized stop band ripple plots for all filters, as compared to those of PM, RGA, PSO, and DE.

4.1. Comparative Effectiveness and Convergence Profiles of RGA, PSO, DE and OHS Algorithms

In order to compare the algorithms in terms of error fitness values, the convergence profiles of the error fitness values obtained for the HP filter of order 20 with RGA, PSO, DE, and OHS are plotted in Figures 17, 18, 19, and 20, respectively. Similar plots have also been obtained for the LP, BP, and BS filters of the same order but are not shown here. OHS converges to a much lower error fitness value than RGA, PSO, and DE, which yield suboptimal, higher error fitness values. RGA converges to a minimum error fitness value of 4.088 in 18.676116 s; PSO converges to 2.575 in 17.2266628 s; DE converges to 1.803 in 17.3641545 s; whereas OHS converges to 1.248 in 18.96705 s.

For all types of filters, OHS converges to the least minimum error fitness values in finding the optimum filter coefficients in moderately short execution times, which may be verified from Tables 7, 8, 9, and 10. In view of the above, it may be inferred that the performance of OHS is the best among all the algorithms. All optimization programs were run in MATLAB 7.5 on a Core (TM) 2 Duo processor, 3.00 GHz, with 2 GB RAM.

5. Conclusion

In this paper, a novel opposition-based harmony search (OHS) algorithm has been applied to the solution of the constrained, multimodal FIR filter design problem, yielding optimal filter coefficients. A comparison of the results of the PM, RGA, PSO, DE, and OHS algorithms has been made. It is revealed that OHS has the ability to converge to the best quality near-optimal solution and possesses the best convergence characteristics among the algorithms in moderately short execution times. The simulation results clearly indicate that OHS demonstrates better performance in terms of magnitude response, minimum stop band ripple, and maximum stop band attenuation, with very little deterioration in the transition width. Thus, OHS may be used as a good optimizer for obtaining the optimal filter coefficients in any practical digital filter design problem in digital signal processing systems.