Journal of Control Science and Engineering

Volume 2008 (2008), Article ID 530803, 10 pages

http://dx.doi.org/10.1155/2008/530803

## Combining a Genetic Algorithm and Simulated Annealing to Design a Fixed-Order Mixed ${H}_{2}/{H}_{\infty}$ Deconvolution Filter with Missing Observations

Department of Information Technology, Ling Tung University, Taichung 408, Taiwan

Received 25 September 2008; Accepted 8 December 2008

Academic Editor: Ben Chen

Copyright © 2008 Jui-Chung Hung. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We introduce a new combined approach to the design of a fixed-order mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter with missing observations. The missing observations model is based on a probabilistic structure, with the probability of the occurrence of missing data modeled as an unknown prior. The aim of the mixed criterion is to achieve optimal ${H}_{2}$ reconstruction subject to an ${H}_{\infty}$ norm constraint on the transfer function from the channel input to the filter error. For simplicity of implementation, the fixed-order model is appealing to engineers in signal processing and in practical applications. In this situation, the deconvolution filter design becomes a complicated nonlinear estimation problem. In this paper, we combine a genetic algorithm (GA) and simulated annealing (SA) to treat the signal reconstruction problem with missing observations. Finally, a numerical example is presented to illustrate the design procedure and confirm the robustness performance of the proposed method.

#### 1. Introduction

A deconvolution filter is used to remove the distortion of a channel and suppress the influence of additive noise in the transmission path. This problem has a wide range of applications in the field of engineering, such as equalization, seismology, and image restoration. When the signal is not corrupted by noise, the channel model is minimum phase, and none of the received data is missing, the inverse system is the simplest deconvolution filter. However, most deconvolution problems are not this easy, since the signal is usually corrupted by noise, the channel may be nonminimum phase, and the received data may be missing. Therefore, various types of optimal deconvolution filters have been developed by minimizing different criteria [1–3].

Deconvolution problems generally adopt one of two frameworks: one is ${H}_{2}$ optimal, the other is ${H}_{\infty}$ robust. Most deconvolution problems are solved via the ${H}_{2}$ optimal method in the time [4] and frequency [5] domains. When the channels are perturbed, the ${H}_{2}$ optimal filter is inadequate. A robust deconvolution filter design based on ${H}_{\infty}$ theory has received great attention for its robustness against system uncertainties [5–7]. However, the ${H}_{\infty}$ robustness design cannot achieve ${H}_{2}$ optimal performance. The proposed mixed ${H}_{2}/{H}_{\infty}$ optimal deconvolution filter design minimizes the reconstruction error subject to a robustness requirement based on the ${H}_{\infty}$ norm, in order to attenuate the performance degradation due to the system's uncertainty [8]. This problem can be interpreted as a problem of ${H}_{2}$ optimal reconstruction filter design subject to an ${H}_{\infty}$ robustness constraint against the deterioration due to parameter variation in the channel.

In practical situations, the received data are not consecutive and contain missing observations [9]. The missing observations may occur uniformly or randomly and are caused by a variety of reasons, for example, intermittent sensor measurement failures, loss of collected data, jammed data, or data coming from a high-noise environment. If not properly taken into account, missing observations can seriously deteriorate the quality of the optimal deconvolution filter. The design problem for a mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter with missing observations is more difficult than the case with no missing observations. In general, the pattern of misses can be quite arbitrary. In this study, the missing observations are modeled as a random Bernoulli pattern, in which each measurement has a fixed probability of being missing, and the misses are independent.
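The Bernoulli missing-observation model described here is straightforward to simulate: each received sample is independently kept with probability $1-p$ and lost otherwise. A minimal sketch in Python (the helper name `apply_bernoulli_misses` and the use of NumPy are illustrative assumptions, not from the paper):

```python
import numpy as np

def apply_bernoulli_misses(y, p_miss, rng=None):
    """Zero out each sample independently with probability p_miss,
    mimicking the random Bernoulli missing-observation pattern."""
    rng = np.random.default_rng(rng)
    gamma = rng.random(y.shape) >= p_miss   # gamma[k] = True -> sample received
    return y * gamma, gamma

# Example: roughly 20% of the samples are missing on average.
rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
y_obs, gamma = apply_bernoulli_misses(y, 0.2, rng=1)
```

Averaged over many samples, the fraction of surviving measurements approaches $1-p$, which is how the missing probability enters the filter design as a prior.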

Furthermore, if the orders of the signal model and the channel dynamics are high, they may lead to complexity in the deconvolution filter structure. In practical applications, fixed-order mixed ${H}_{2}/{H}_{\infty}$ optimal deconvolution filter designs are appealing for their simplicity of implementation and operation-time savings. Thus, the fixed-order mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter [5] is used; however, conventional optimization techniques cannot be employed to obtain a closed-form solution of the mixed ${H}_{2}/{H}_{\infty}$ optimal deconvolution filter with missing observations. In this study, the fixed-order mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter design procedure is divided into two steps. The first step is based on a GA to search the neighborhood of the coefficients of the optimal deconvolution filter. In the second step, SA is used to refine the coefficient estimates of the optimal deconvolution filter.

The GA was introduced as an optimal search algorithm [10–14]. It is a parallel global search technique that emulates natural genetic operators such as reproduction, crossover, and mutation. At each generation, it explores different areas of the parameter space, and then directs the search to the region where there is a high probability of finding improved performance. Because the genetic algorithm simultaneously evaluates many points in parameter space, it is more likely to converge toward the global solution. In addition, it need not assume that the search space is differentiable or continuous, and it can also iterate several times on each piece of received data.

The SA algorithm [15–19] is a numerical simulation method based on the dynamics of crystallization. A solid may be heated until its constituents can move freely and it melts. The melt is then allowed to cool very slowly until it solidifies in a certain arrangement. The SA algorithm makes use of this idea. Essentially, the SA strategy is a serial search method, unlike the GA, which is a parallel search method. S. Geman and D. Geman [19] proved that if the cooling schedule is slow enough, the desired global extreme can always be found. Comparing GA and SA, we find that GA exhibits fast initial convergence, but its performance deteriorates as it approaches the desired global extreme. Interestingly, SA shows a complementary convergence pattern, in addition to high accuracy. We combine selected features of GA and SA to achieve weak dependence on initial parameters, a parallel search strategy, fast convergence, and high accuracy [20, 21]. The GA/SA starts the search procedure as a pure GA and ends as a pure SA. The transition from GA to SA occurs when the fittest individual remains the same for generations. Hence, GA/SA is very suitable for treating the global optimization problem of nonlinear parameter estimation with corrupting noise and missing observations. Finally, a numerical example is given to illustrate the design procedure of the proposed method and to confirm the robustness performance of the fixed-order mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter with missing observations.

#### 2. Fixed-Order Mixed ${H}_{2}/{H}_{\infty}$ Deconvolution Filter under Missing Observations

Consider a discrete deconvolution system with missing observations, as shown in Figure 1. The received signal is where is the inverse of (i.e., the unit delay), and , , and are assumed to be zero-mean white Gaussian noises with the following covariances: where denotes the expectation operator, , , and are assumed to be positive scalars, and the system is assumed to have reached statistical steady state.

The is the model of missing observations:

Thus, can be regarded as the measurements of with missing observations. The sequence is assumed to be asymptotically stationary and independent of . Furthermore, they are mutually independent. The probability of a missing measurement is [9] where is a fixed probability, independent of time. We let denote the estimate of . From Figure 1, the estimation error is given by where is the fixed-order deconvolution filter. The objective of this design is to find a fixed-order IIR deconvolution filter:

The power spectral density of is given by where superscript denotes the complex conjugate and is the power spectral density of .

In the fixed-order ${H}_{2}$ optimal deconvolution filter design case, a stable filter will be specified to minimize the mean square error according to the minimum mean square error (MMSE) criterion [4]. The following expression for the performance index is derived according to Parseval's theorem [22]: where is defined in (7) and denotes integration around the unit circle. We assume the Laurent expansion of about the singular point, for . By the residue theorem, the ${H}_{2}$ optimal deconvolution problem in (8) becomes the following minimization problem:

In practical deconvolution systems, the noise variances , , and may vary and the observations may be missing. Eliminating the performance degradation due to noise uncertainties and channel perturbation to guarantee the reconstruction performance is an important topic in practical signal deconvolution problems. The ${H}_{\infty}$ robustness design can guarantee that the worst-case effect of these noise uncertainties or channel perturbation on the reconstruction performance is less than a prescribed level [7]; that is, if the ${H}_{\infty}$ norm of the error spectrum is less than , then the sensitivity of the reconstruction error to the noise uncertainties or channel perturbation must be less than from the energy point of view. In this study, in order to take advantage of both ${H}_{2}$ optimal reconstruction and low sensitivity to noise uncertainties and channel perturbation, a fixed-order deconvolution filter is specified to achieve the minimization of the reconstruction error in (8) and at the same time to satisfy the following robustness requirement, where is a positive value. In other words, the ${H}_{2}$ optimal reconstruction in (8) and the ${H}_{\infty}$ robustness in (10) should be satisfied simultaneously.
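Numerically, an ${H}_{\infty}$-norm constraint of this kind can be checked by evaluating the magnitude of the relevant stable transfer function on a dense grid of the unit circle and taking the peak. The sketch below is a generic grid approximation (the helper name `hinf_norm` and the grid approach are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def hinf_norm(b, a, n_grid=4096):
    """Approximate the H-infinity norm of a stable discrete transfer
    function b(z)/a(z) as the peak magnitude on a dense grid of the
    unit circle (a grid approximation, not an exact computation).
    Coefficients are given in ascending powers of z^{-1}."""
    w = np.linspace(0.0, np.pi, n_grid)
    z = np.exp(1j * w)
    num = np.polyval(b[::-1], 1.0 / z)   # b0 + b1 z^{-1} + ...
    den = np.polyval(a[::-1], 1.0 / z)   # a0 + a1 z^{-1} + ...
    return np.max(np.abs(num / den))

# A first-order low-pass example: H(z) = 0.5 / (1 - 0.5 z^{-1}).
# Its peak gain occurs at z = 1: 0.5 / (1 - 0.5) = 1.0.
peak = hinf_norm(np.array([0.5]), np.array([1.0, -0.5]))
```

Inside a search loop, a candidate filter would be rejected (or re-drawn) whenever this peak exceeds the prescribed bound, mirroring the constraint-checking steps of the design procedure.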

Searching the coefficients of via a genetic algorithm to solve the mixed ${H}_{2}/{H}_{\infty}$ deconvolution filter design problem is the key point of our design. This complicated nonlinear design problem will be treated by GA/SA in the next section.

#### 3. Deconvolution Filter Design

##### 3.1. GA for Fixed-Order Mixed ${H}_{2}/{H}_{\infty}$ Optimal Deconvolution Filter Design

The genetic algorithm is composed of three operations: (1) reproduction, (2) crossover, and (3) mutation [10–14]. These operations are implemented by performing the basic tasks of copying strings, exchanging portions of strings, and changing the state of bits from 1 to 0 or from 0 to 1. These operations ensure that the fittest members of the population survive and that their information content is preserved and combined to generate better offspring, improving the next generation's performance. We describe the genetic algorithm in the next subsections [14].

###### 3.1.1. Fitness and Cost Function

In this study, the cost function (or energy) is defined as follows:

Our objective is to search for to satisfy the constraint and then to achieve the minimization of (8). In a genetics-based design procedure, a chromosome is evaluated by the cost function, which returns a value. The value of the cost is then mapped into a fitness value so as to fit into the genetic algorithm. The fitness value is a reward based on the performance of the possible solution represented by the string; it can be thought of as how well a fixed-order deconvolution filter tuned according to the string minimizes the cost function . In this paper, we use windowing mapping [14]. Windowing mapping prevents the fitness difference between good and bad chromosomes from becoming so small that the performance of the algorithm decreases. In this paper, the relationship between and is expressed as follows [5]: where and are the smallest and the largest values evaluated in the generation, and and are the corresponding fitness values [5, 14].
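As an illustration of such a cost-to-fitness mapping, the sketch below uses a simple linear windowing rule in which the lowest-cost chromosome receives the highest fitness. The exact formula of [5] is not reproduced here, and the bounds `f_min`/`f_max` are illustrative assumptions:

```python
import numpy as np

def window_fitness(costs, f_min=1.0, f_max=100.0):
    """Map raw cost values to fitness by linear windowing: the
    lowest-cost chromosome gets f_max, the highest gets f_min.
    (A common linear mapping; the paper's exact formula from [5]
    may differ in its constants.)"""
    costs = np.asarray(costs, dtype=float)
    j_min, j_max = costs.min(), costs.max()
    if j_max == j_min:                      # degenerate population
        return np.full_like(costs, f_max)
    return f_min + (f_max - f_min) * (j_max - costs) / (j_max - j_min)

fitness = window_fitness([2.0, 4.0, 6.0])   # best cost 2.0 -> fitness 100
```

Keeping the fitness spread wide in every generation preserves selection pressure even when all surviving chromosomes have similar costs.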

The following three operations are employed in the genetic algorithm to search for the global optimal solution (i.e., the best fitness) in (12) without becoming trapped at local minima [10–14].

###### 3.1.2. Reproduction

Reproduction is based on the principle of survival of the fittest. A fitness is assigned to each individual string in the population, where a higher value implies better fitness. Strings with a large fitness obtain a greater number of copies in the new generation. For instance, in roulette wheel selection, the *i*th string with a high fitness value is given a proportionately high probability of reproduction, . The distribution of can be presented as follows:

Once the strings are reproduced or copied for possible use in the next generation in a mating pool, they wait for the action of the other operators: crossover and mutation.
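Fitness-proportionate reproduction can be sketched directly from this description (a minimal NumPy version; the population and fitness values are illustrative):

```python
import numpy as np

def roulette_select(fitness, n_select, rng=None):
    """Roulette-wheel (fitness-proportionate) reproduction: string i
    is copied with probability fitness[i] / sum(fitness)."""
    rng = np.random.default_rng(rng)
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=n_select, p=p)

# A string holding 60% of the total fitness wins about 60% of the copies.
idx = roulette_select([6.0, 3.0, 1.0], 10_000, rng=0)
freq = np.bincount(idx, minlength=3) / len(idx)   # close to [0.6, 0.3, 0.1]
```

The selected indices form the mating pool on which crossover and mutation then operate.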

###### 3.1.3. Crossover

If the chromosomes are only reproduced, the search is confined to the best existing individuals and no new individuals are created. Crossover provides a mechanism for strings to mix and match desirable qualities through a random process. After reproduction, simple crossover proceeds in three steps. First, two newly reproduced strings are selected from the mating pool produced by reproduction. Second, a position within the two strings is selected uniformly at random. The third step involves exchanging all characters following the crossing site [14].
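The three steps of simple crossover translate almost line-for-line into code (a sketch on binary strings; the names are illustrative):

```python
import numpy as np

def single_point_crossover(parent_a, parent_b, rng=None):
    """Simple one-point crossover: pick a random crossing site and
    swap the tails of the two parent strings."""
    rng = np.random.default_rng(rng)
    site = rng.integers(1, len(parent_a))            # crossing site
    child_a = np.concatenate([parent_a[:site], parent_b[site:]])
    child_b = np.concatenate([parent_b[:site], parent_a[site:]])
    return child_a, child_b

a = np.array([1, 1, 1, 1, 1, 1])
b = np.array([0, 0, 0, 0, 0, 0])
c, d = single_point_crossover(a, b, rng=0)
```

Note that crossover rearranges existing genetic material; no bit values appear in the children that were absent from both parents.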

###### 3.1.4. Mutation

Reproduction and crossover are the primary operations that govern the search power of genetic algorithms. The third operation, mutation, enhances the ability of genetic algorithms to search for the optimal solution. Mutation is the occasional alteration of a value at a particular string position; it acts as an insurance policy against the permanent loss of any simple bit.
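Bit-flip mutation is equally compact (a sketch; the mutation probability `p_mut` is an illustrative value):

```python
import numpy as np

def mutate(chromosome, p_mut=0.01, rng=None):
    """Bit-flip mutation: each bit is inverted independently with a
    small probability p_mut."""
    rng = np.random.default_rng(rng)
    flip = rng.random(chromosome.shape) < p_mut
    return np.where(flip, 1 - chromosome, chromosome)

bits = np.zeros(10_000, dtype=int)
mutated = mutate(bits, p_mut=0.01, rng=0)   # roughly 1% of bits flipped
```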

##### 3.2. SA for Fixed-Order Mixed Optimal Deconvolution Filter Design

###### 3.2.1. Introduction to Simulated Annealing

Metropolis [15] introduced a Monte Carlo approach to simulate the evolution of a solid to thermal equilibrium at a given temperature. Kirkpatrick et al. [16] found the analogy between minimizing the cost function of a combinatorial optimization problem and the slow cooling of a solid until it reaches its low-energy ground state. They found that the optimization process can be realized by applying the Metropolis algorithm, substituting cost for energy and configuration for state, and viewing temperature as a control parameter. We can also view it as an enhanced version of the technique of local optimization, or iterative improvement, in which an initial solution is repeatedly improved by making small local alterations until no alteration can yield a better solution. SA is an optimization technique that simulates the physical process of heating up a solid and then cooling it down slowly until it crystallizes [17, 18]. It is implemented using a stepwise-exponential decrease of temperature. As the temperature decreases, the atomic energies decrease. A crystal with a regular structure is obtained in the state where the system has minimum energy. When the cooling is carried out very quickly, which is known as a *rapid quench*, extensive irregularity and defects are seen in the crystal structure. The system does not reach the minimum-energy state and ends in a crystalline state with higher energy. At a given temperature, the probability distribution of system energies can be described by the Boltzmann probability [17], $P(E)=e^{-E/({k}_{B}T)}$, where ${k}_{B}$ is Boltzmann's constant, $T$ is the temperature, and $P(E)$ is the probability that the system is in a state with energy $E$.

Note that the Boltzmann factor $e^{-E/({k}_{B}T)}$ is a number in the interval (0,1) when the energy and temperature are both positive, and so it can be interpreted as a probability that depends on both. When the temperature is very high, this factor is close to 1 for all energy states. It can also be seen that there is a small probability that the system has high energy even at low temperatures. Therefore, the statistical distribution of energies allows the system to escape from a local energy minimum.
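This escape mechanism is exactly the Metropolis acceptance rule: downhill moves are always taken, and an uphill move of size ΔE survives with probability $e^{-\Delta E/T}$. A minimal sketch, with Boltzmann's constant absorbed into the temperature:

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random):
    """Metropolis rule: always accept an improvement (delta_e <= 0);
    accept a worsening move with probability exp(-delta_e / T)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

# At high temperature almost any uphill move is accepted;
# at low temperature the same move is almost always rejected.
random.seed(0)
hot  = sum(metropolis_accept(1.0, 100.0) for _ in range(1000))
cold = sum(metropolis_accept(1.0, 0.01)  for _ in range(1000))
```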

SA can be viewed as an algorithm that generates a sequence of Markov chains for a sequence of decreasing temperature values [17, 18]. At each temperature, the generation process is repeated until the probability distribution of the system states approaches the Boltzmann distribution. If the temperature is decreased slowly enough, the Boltzmann distribution tends to converge to global minimal states. The analysis of SA based on infinite length Markov chains has been carried out in the literature [18]. However, in any implementation of the algorithm the Markov chain is of finite length. Therefore, asymptotic convergence can only be approximated. Due to these approximations, the SA is no longer guaranteed to find a global minimum with probability 1.

###### 3.2.2. SA-Based Fixed-Order Mixed ${H}_{2}/{H}_{\infty}$ Optimal Deconvolution Filter Design

Simulated annealing consists of three components: (1) generation of a neighbor candidate solution by perturbing the current solution, (2) acceptance of the solution based on the Boltzmann probability, and (3) an iterative procedure.

The neighbor candidate solution of the first component is [16, 17] where is a random Gaussian number with zero mean and variance of . The is formulated as [17] where is the success ratio of the changes made during the last iterations, are near 1, and .
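A sketch of this neighbor-generation step, with a simple success-ratio rule standing in for the variance update of (16) (the constants `grow` and `shrink` are illustrative assumptions; the paper's exact update from [17] is not reproduced):

```python
import numpy as np

def perturb(theta, sigma, rng):
    """Generate a neighbor candidate by adding zero-mean Gaussian
    noise to the current filter coefficients."""
    return theta + rng.normal(0.0, sigma, size=theta.shape)

def update_sigma(sigma, success_ratio, grow=1.05, shrink=0.95):
    """A simple adaptive step-size rule in the spirit of (16):
    widen the search when many recent moves succeeded, narrow it
    otherwise. (The paper's exact update is not reproduced here.)"""
    return sigma * grow if success_ratio > 0.5 else sigma * shrink

rng = np.random.default_rng(0)
theta = np.zeros(3)                       # current filter coefficients
candidate = perturb(theta, 0.1, rng)      # neighbor candidate solution
```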

In the second part, a new solution is accepted as the current solution when its cost is lower than that of the current solution. If a new solution has a higher cost ( is lower), it is accepted with a probability of acceptance , given by

In the third part, the last accepted candidate solution becomes the initial solution for the next iteration. The temperature of the next iteration is reduced according to a cooling schedule. The temperature updating (decreasing) rule employed is the following [17]: where approaches 1 and .

From the above analysis, the SA-based deconvolution filter design is divided into the following steps.

1. Initialize the annealing schedule parameters , randomly choose an initial state to satisfy the constraint , and compute the energy in (12).
2. Generate the new state by as
3. Compute ; if random or new-state energy original-state energy (), then the current-state = new-state.
4. If the temperature is the same in iterations, then decrease the temperature as ; else .
5. Repeat steps 2 to 4 until the system is frozen.

The flowchart of the SA procedure is presented in Figure 2.
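The five steps above assemble into a compact loop. The sketch below applies them to a toy scalar cost in place of the filter energy in (12), omits the robustness-constraint check of step 1, and uses illustrative schedule constants throughout:

```python
import math
import random

def simulated_annealing(cost, x0, sigma=0.5, t0=1.0, alpha=0.95,
                        moves_per_temp=50, t_min=1e-3, seed=0):
    """Minimal SA loop following the five steps above, applied to a
    generic scalar cost function (a toy stand-in for (12))."""
    rng = random.Random(seed)
    x, e, t = x0, cost(x0), t0
    best_x, best_e = x, e
    while t > t_min:                                 # step 5: until frozen
        for _ in range(moves_per_temp):
            x_new = x + rng.gauss(0.0, sigma)        # step 2: neighbor
            e_new = cost(x_new)
            # step 3: Metropolis acceptance
            if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        t *= alpha                                   # step 4: cooling
    return best_x, best_e

# Toy cost with its minimum at x = 2.
x_opt, e_opt = simulated_annealing(lambda v: (v - 2.0) ** 2, x0=10.0)
```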

##### 3.3. GA/SA for Fixed-Order Mixed ${H}_{2}/{H}_{\infty}$ Optimal Deconvolution Filter Design

In order to combine the fast initial convergence and weak dependence on initial parameters of the GA with the high accuracy of SA, we combine both methods. The combined GA/SA always starts the search procedure as a pure GA and ends as a pure SA. The transition from GA to SA occurs when the following condition is satisfied: the fittest individual remains the same for generations [20]. This condition is satisfied whenever the algorithm converges to an intermediate solution. The solution found thus far constitutes a good initial guess for SA. The SA's initial and final temperatures, as well as the step length, can be adjusted as a function of the fittest individual's energy.

Based on the above analysis, the design procedure of GA/SA parameter estimation with noises and missing observations is divided into the following steps.

1. Given the received data , the order of the deconvolution filter , and the robustness constraint , generate a random population of chromosomes.
2. Check the robustness constraint . If the robustness constraint is not satisfied, then renew the chromosomes.
3. Compute the performance from (8).
4. Compute the corresponding fitness value from (12).
5. Use the GA operators (reproduction, crossover, and mutation) to produce the chromosomes of the next generation.
6. Repeat steps 2 to 5 until the fittest individual remains the same for generations.
7. Given the best chromosome, are the initial SA values.
8. Generate the new state from (19).
9. Check the robustness constraint . If the robustness constraint is not satisfied, then renew the new state.
10. Compute the new-state energy from (20).
11. Let ; if random or new-state energy original-state energy (), then the current-state = new-state.
12. If the temperature is the same in iterations, then decrease the temperature as , and compute the from (16). Then repeat steps 8 to 12 until the system is frozen.
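The overall procedure can be reduced to its control flow: a GA phase that runs until the best individual stalls, followed by SA refinement started from that individual. The skeleton below is a simplified real-coded sketch on a toy cost; the paper's version operates on binary chromosomes and enforces the robustness constraint at steps 2 and 9, which is omitted here:

```python
import math
import random

def ga_sa_hybrid(cost, pop_size=20, n_genes=3, stall_limit=10,
                 sigma=0.2, t0=1.0, alpha=0.9, t_min=1e-3, seed=0):
    """Skeleton of the GA/SA hybrid: run a GA until the fittest
    individual stalls for `stall_limit` generations, then hand the
    best solution to an SA refinement stage."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(n_genes)]
           for _ in range(pop_size)]
    best, stall = min(pop, key=cost), 0

    while stall < stall_limit:                   # --- GA phase ---
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)            # tournament-style pick
            parent = a if cost(a) < cost(b) else b
            new_pop.append([g + rng.gauss(0, 0.1) for g in parent])
        pop = new_pop
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best, stall = gen_best, 0
        else:
            stall += 1

    x, e = best, cost(best)                      # --- SA phase ---
    best_x, best_e, t = x, e, t0
    while t > t_min:
        x_new = [g + rng.gauss(0, sigma) for g in x]
        e_new = cost(x_new)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha                               # cooling schedule
    return best_x, best_e

x_opt, e_opt = ga_sa_hybrid(lambda v: sum(g * g for g in v))
```

The GA supplies a good basin of attraction quickly; the SA stage then trades the population's breadth for local accuracy, matching the complementary convergence behavior described in Section 1.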

#### 4. Design Example

A numerical example is given to demonstrate the feasibility of applying GA/SA to the design of a fixed-order mixed ${H}_{2}/{H}_{\infty}$ optimal deconvolution filter with missing observations. The parameters of GA/SA are set as follows [4, 14, 21]: where is the crossover probability and is the mutation probability. The number of evaluations is set to 10 000. In the case of the genetic-based estimation, the population size is , and the generation number is equal to 50. The simulated results are obtained by averaging 50 independent Monte Carlo (MC) runs.

Consider a nonminimum phase deconvolution system in Figure 1 with the received signal corrupted by colored noise such that where the missing sequence is a random process using Bernoulli modulations given in (3), with . The robustness constraint, signal model, channel model, and noise model are described as follows:

The driving signal , disturbance noise , and measurement noise are assumed to be independent, stationary, and white with zero mean and variances , , and , respectively. The proposed algorithm is used to design two kinds of fixed-order deconvolution filters. These results are summarized in Table 1. The second-order IIR(2) filters generated by the proposed algorithm are labeled , , and , with , , and , respectively. The tenth-order FIR(10) filters generated by the proposed algorithm are labeled , , and , with , , and , respectively. is a third-order IIR(3) filter with no missing data using the MMSE criterion [4]. The second-order (2) filters generated by pure SA are labeled , , and , with , , and , respectively. The error variances of the various criteria and the robustness performance are tabulated in Tables 2 and 3. Inspecting Tables 2 and 3, the mean square error of the proposed method is smaller than that of the full-order optimal deconvolution method [4] under different missing observations. Obviously, the reconstruction performance of the conventional optimal deconvolution filter deteriorates because of the missing observations. The reconstruction performance is improved significantly if the missing probability is considered in the design procedure in the case of missing observations. From Tables 2 and 3, it is evident that the robustness is greater than that of the MMSE deconvolution filter when the channel suffers from small perturbations, so the proposed fixed-order mixed ${H}_{2}/{H}_{\infty}$ filter is more appealing for practical applications. The convergence of the cost function of the second-order mixed ${H}_{2}/{H}_{\infty}$ optimal filter via GA/SA is shown in Figures 3–5 with different probabilities of missing data. Note that the cost functions of the GA-based estimation method converge rapidly in the early generations, but performance improves very slowly as the method approaches the desired global extreme.
The proposed GA/SA-based estimation algorithm always starts the search procedure as a pure-GA parameter estimation and ends as a pure-SA parameter estimation. From Figures 3–5, the switch between the algorithms occurs at the 15th generation in the case of , at the 23rd generation in the case of , and at the 21st generation in the case of , respectively. From Figures 3–5, it is seen that the convergence of the GA/SA-based algorithm is rapid initially and the performance is more accurate than that of the GA-based algorithm.

#### 5. Conclusion

In this paper, design methods for fixed-order mixed ${H}_{2}/{H}_{\infty}$ optimal deconvolution filters with missing observations have been introduced via GA/SA. This deconvolution filter design method takes advantage of ${H}_{2}$ optimal reconstruction performance and ${H}_{\infty}$ robustness against channel variation and noise uncertainties. Simulation results indicate that the proposed method behaves well even in cases with 10% to 30% missing observations. The reconstruction performance is improved significantly when the missing probability is considered in the proposed deconvolution filter design procedure. The GA/SA design algorithm converges rapidly, and the reconstruction performance is acceptable even if the order of the deconvolution filter is low. The proposed design methods are suitable for lower-order deconvolution filter designs. In addition, the proposed design method is easy to implement and saves operation time. Moreover, it is useful for practical application to signal reconstruction problems with high-order channels and signal models and missing observations.

#### Acknowledgments

The author is grateful to the anonymous referees, whose constructive and helpful comments led to significant improvements in the manuscript. This research was supported by the National Science Council under Grant no. NSC 97-2221-E-275-006.

#### References

- M. T. Silvia and E. A. Robinson, *Deconvolution of Geophysical Time Series in the Exploration for Oil and Natural Gas*, Elsevier, New York, NY, USA, 1979.
- D. W. Eaton and J.-M. Kendall, “Improving seismic resolution of outermost core structure by multichannel analysis and deconvolution of broadband SmKS phases,” *Physics of the Earth and Planetary Interiors*, vol. 155, no. 1-2, pp. 104–119, 2006.
- C. Vural and W. A. Sethares, “Blind image deconvolution via dispersion minimization,” *Digital Signal Processing*, vol. 16, no. 2, pp. 137–148, 2006.
- B.-S. Chen and S.-C. Peng, “Optimal deconvolution filter design based on orthogonal principle,” *Signal Processing*, vol. 25, no. 3, pp. 361–372, 1991.
- B.-S. Chen and J.-C. Hung, “Fixed-order ${H}_{2}$ and ${H}_{\infty}$ optimal deconvolution filter designs,” *Signal Processing*, vol. 80, no. 2, pp. 311–331, 2000.
- S. B. Gelfand, J. V. Krogmeier, and Y. Wei, “Uniform observability and exponential convergence rate of the Kalman filter for the FIR deconvolution problem,” *Signal Processing*, vol. 81, no. 3, pp. 593–607, 2001.
- M. J. Grimble and A. El Sayed, “Solution of the ${H}_{\infty}$ optimal linear filtering problem for discrete-time systems,” *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 38, no. 7, pp. 1092–1104, 1990.
- L. D. Davis, E. G. Collins Jr., and W. M. Haddad, “Discrete-time mixed-norm ${H}_{2}/{H}_{\infty}$ controller synthesis,” *Optimal Control Applications and Methods*, vol. 17, no. 2, pp. 107–121, 1996.
- J.-M. Chen and B.-S. Chen, “System parameter estimation with input/output noisy data and missing measurements,” *IEEE Transactions on Signal Processing*, vol. 48, no. 6, pp. 1548–1558, 2000.
- J. H. Holland, “Outline for a logical theory of adaptive systems,” *Journal of the ACM*, vol. 9, no. 3, pp. 297–314, 1962.
- D. E. Goldberg, *Genetic Algorithms in Search, Optimization, and Machine Learning*, Addison-Wesley, Reading, Mass, USA, 1989.
- J.-C. Hung, “A genetic algorithm approach to the spectral estimation of time series with noise and missed observations,” *Information Sciences*, vol. 178, no. 24, pp. 4632–4643, 2008.
- S. Manoharan and S. Shanmuganathan, “A comparison of search mechanisms for structural optimization,” *Computers & Structures*, vol. 73, no. 1–5, pp. 363–372, 1999.
- S. N. Sivanandam and S. N. Deepa, *Introduction to Genetic Algorithms*, Springer, New York, NY, USA, 2007.
- N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of state calculations by fast computing machines,” *The Journal of Chemical Physics*, vol. 21, no. 6, pp. 1087–1092, 1953.
- S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” *Science*, vol. 220, no. 4598, pp. 671–680, 1983.
- D. T. Pham and D. Karaboga, *Intelligent Optimisation Techniques*, Springer, London, UK, 2000.
- C. Andrieu and A. Doucet, “Simulated annealing for maximum a posteriori parameter estimation of hidden Markov models,” *IEEE Transactions on Information Theory*, vol. 46, no. 3, pp. 994–1004, 2000.
- S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” *IEEE Transactions on Pattern Analysis and Machine Intelligence*, vol. 6, no. 6, pp. 721–741, 1984.
- C. R. Zacharias, M. R. Lemes, and A. Dal Pino Jr., “Combining genetic algorithm and simulated annealing: a molecular geometry optimization study,” *Journal of Molecular Structure: THEOCHEM*, vol. 430, no. 1–3, pp. 29–39, 1998.
- G.-C. Liao and T.-P. Tsao, “Application of a fuzzy neural network combined with a chaos genetic algorithm and simulated annealing to short-term load forecasting,” *IEEE Transactions on Evolutionary Computation*, vol. 10, no. 3, pp. 330–340, 2006.
- A. Papoulis, *Probability, Random Variables, and Stochastic Processes*, McGraw-Hill, New York, NY, USA, 2nd edition, 1984.