Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 573941, 8 pages
http://dx.doi.org/10.1155/2013/573941
Research Article

An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

1Remote Measurement and Control Key Lab of Jiangsu Province, School of Instrument Science & Engineering, Southeast University, Nanjing 210096, China
2School of Information and Control, Nanjing University of Information Science & Technology, Nanjing 210044, China
3Jiangsu Key Laboratory of Meteorological Observation and Information Processing, Nanjing University of Information Science & Technology, Nanjing 210044, China

Received 24 December 2012; Accepted 8 March 2013

Academic Editor: Yang Tang

Copyright © 2013 Kai Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to determine the weights in a backpropagation neural network (BPN), giving it better global optimization characteristics than traditional optimization algorithms. In this paper, we apply the GA-BPN to image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm recovers the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

1. Introduction

Digital images are frequently corrupted by impulse noise during image acquisition or transmission. Preserving image details and attenuating noise are two important requirements of image processing, but they are contradictory in nature. This research therefore focuses on removing impulse noise while keeping the loss of detail as low as possible.

Compared with traditional algorithms, nonlinear techniques are superior and provide more satisfactory results while preserving image details. The most basic nonlinear filter is the standard median (SM) filter [1]. It replaces each pixel in the image by the median value of the neighborhood window centered at that pixel. The SM filter works effectively at low noise densities but at the cost of blurring the image. One solution to this problem is the weighted median (WM) filter [2], which gives more weight to some values within the window than to others. It emphasizes or deemphasizes specific samples because, in most applications, not all samples are equally important. A special case of the WM filter is the center weighted median (CWM) filter [3], which gives more weight only to the center value of the window. However, these filters do not perform well at higher noise densities, and the filtered image is blurred with poor preservation of detail. They treat every pixel of the image as noisy, which may not hold in practice: they do not detect whether a pixel is actually corrupted by an impulse but simply replace each pixel by the median value. A better tactic is to incorporate decision making, or switching, into the filtering: first determine whether each pixel is contaminated, and then apply the recovery method only to pixels corrupted by noise. Corrupted pixels are replaced by median values, while noise-free pixels are left unaltered. Since not every pixel is filtered, undue distortion is avoided as far as possible.
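As a concrete reference point for the switching filters discussed above, here is a minimal Python sketch of the standard median filter (the function name and border handling are our own illustrative choices, not from [1]):

```python
import numpy as np

def standard_median_filter(img, w=3):
    """Replace every pixel by the median of its w x w neighborhood
    (edge pixels use a reflected border)."""
    pad = w // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + w, j:j + w])
    return out

# A lone impulse (255) inside a flat region is removed completely.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
print(standard_median_filter(img)[2, 2])  # -> 100
```

Note that the lone impulse is removed, but every pixel, noisy or not, is rewritten; this is exactly the blurring drawback that the switching approach addresses.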

In recent years, many researchers have tried to combine GAs with artificial neural networks (ANNs) to find effective solutions to complex problems and to better understand the relationship between learning and evolution, which has become an active topic in the field of artificial life [3–14].

This paper proposes using a genetic algorithm to improve backpropagation neural networks by finding the most suitable network connection weights and network structure, forming a GA-BP model that is then applied to image filtering. The experiments show that it achieves good performance in image recovery.

The rest of the paper is organized as follows. Section 2 presents our work, including the workflow and the theories adopted. Section 3 presents the experimental results of the proposed algorithm. Finally, Section 4 concludes the paper.

2. Our Work

2.1. Workflow

According to its function, the proposed algorithm can be divided into three stages: detector training, noise detection, and image recovery, as shown in Figure 1(a).

Figure 1: Our workflow.

Figure 1(b) shows the content of detector training. The known noise we add has several characteristics: the pixel's value, the pixel's median within the window, the pixel's ROLD, and the pixel's position. We use the value, median, and ROLD as inputs and the position as output to train a GA-BPN as a detector. After this step, we obtain a well-trained detector capable of detecting the positions of noise pixels.

Figure 1(c) shows the process by which noise pixels are detected. We feed the noised (target) image itself, each pixel's median within the window, and each pixel's ROLD into the detector described above; the detector then estimates the positions of the noise pixels in the target image.

Finally, an adaptive weighted average algorithm, which varies with window size and distance, recovers the noise pixels recognized by the GA-BPN, as shown in Figure 1(d).

2.2. Related Theory
2.2.1. Noise Model

Classical salt and pepper noise takes values in the intervals $[0, m]$ and $[255 - m, 255]$, where $m$ is 0 or a small positive integer. The model for images with noise is described as follows [15]:
$$y_{ij} = \begin{cases} n_{ij}, & \text{with probability } p, \\ x_{ij}, & \text{with probability } 1 - p, \end{cases}$$
where $y$ is the image with noise, $x$ is the original part of $y$, $n$ is the noise part of $y$ (with usual values 0 or 255), and $p$ is the noise density.
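The noise model above can be sketched as follows (a hypothetical Python helper of our own; the extreme values are fixed at 0 and 255, i.e., the case $m = 0$):

```python
import numpy as np

def add_salt_pepper(img, p, seed=None):
    """With probability p, replace a pixel by an extreme value
    (0 'pepper' or 255 'salt', chosen with equal probability);
    with probability 1 - p the pixel keeps its original value."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < p      # pixels to corrupt
    salt = rng.random(img.shape) < 0.5    # salt vs pepper
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

img = np.full((100, 100), 128, dtype=np.uint8)
noisy = add_salt_pepper(img, p=0.5, seed=0)
# Roughly half of the pixels are driven to 0 or 255.
print(np.mean((noisy == 0) | (noisy == 255)))
```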

2.2.2. Rank-Ordered Logarithmic Difference

In [15], Dong et al. proposed a new local image statistic, ROLD, based on the ROAD feature [16]. It identifies more noisy pixels with fewer false hits and is well suited to random-valued impulse noise. Simulation results show that it outperforms a number of existing methods both visually and quantitatively; based on this, we adopt it in our algorithm. Its definition is as follows.

Map the gray value of the image from $[0, 255]$ to $[0, 1]$ by the linear transformation $x_u = g_u / 255$, where $x_u$ denotes the gray value of the pixel of the image at position $u$. The image has a window of size $w \times w$ with its center on pixel $u$; pixel $v$ is one pixel in the window. The logarithmic distance between the two pixels $u$ and $v$ is defined as
$$D(u, v) = 1 + \frac{\max\{\log_a |x_u - x_v|,\ -b\}}{b},$$
where $a > 1$ and $b > 0$. The best distinction is achieved when $a = 2$ and $b = 5$.

Sort all $D(u, v)$ in the window in descending order. If the noise density exceeds 25%, ROLD is the sum of the 12 largest values under a $5 \times 5$ window; otherwise, it is the sum of the 4 largest values under a $3 \times 3$ window.

Because of the variation between a noise pixel and its neighbors in the window, the ROLD value of a noise pixel will be high, while the ROLD value of a normal pixel will be small because of the consistency between it and its neighbors. ROLD therefore serves as an important characterization of pixels and distinguishes noise pixels from normal pixels.
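A sketch of the ROLD computation for a single pixel (the parameter choices $a = 2$, $b = 5$, and $m = 4$ for a $3 \times 3$ window follow [15]; the function layout is our own):

```python
import numpy as np

def rold(img, i, j, w=3, m=4, b=5):
    """Rank-Ordered Logarithmic Difference at pixel (i, j).
    Gray values are scaled to [0, 1]; for each neighbor v in the
    w x w window, the distance D(u, v) = 1 + max(log2|x_u - x_v|, -b)/b
    is computed, and ROLD is the sum of the m largest distances
    (m = 4 for a 3x3 window, m = 12 for 5x5)."""
    x = img.astype(float) / 255.0
    r = w // 2
    center = x[i, j]
    dists = []
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if di == 0 and dj == 0:
                continue
            diff = abs(center - x[i + di, j + dj])
            logd = np.log2(diff) if diff > 0 else -np.inf
            dists.append(1.0 + max(logd, -b) / b)
    return sum(sorted(dists, reverse=True)[:m])

# An impulse in a flat region scores high; a flat pixel scores low.
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
print(rold(img, 2, 2))  # high (around 3.4)
print(rold(img, 1, 1))  # much smaller
```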

2.2.3. GA-BPN

Neural networks are parallel processing structures consisting of nonlinear processing elements interconnected by fixed or variable weights [17]. These structures can be designed to generate arbitrarily complex decision regions for specific mappings, so ANNs are well suited for use as detectors and classifiers. Classic pattern recognition algorithms require assumptions about the underlying statistics of the environment that generated the available data; neural networks are nonparametric and effectively address a broad class of problems. Furthermore, neural networks have intrinsic fault tolerance: some "neurons" may fail, yet the overall network can still perform well because the information relating the mapping from input to output is distributed across all of the elements of the network.

Training neural networks with a gradient-based optimization algorithm (e.g., backpropagation) can lead to locally optimal solutions that may be far removed from the global optimum. Evolutionary optimization methods, however, offer a procedure for stochastically searching for suitable weights and bias terms given a specific network topology [18–35]. Consequently, we adopt a GA-BPN as the noise detector for our problem.

Ye et al. designed a GA-BPN model for currency recognition and obtained good results [32]; we follow its steps in our algorithm. To improve the global optimum of the BP neural network with a genetic algorithm, we use the global search characteristics of the GA to find the most suitable network connection weights and network structure. Different networks have different numbers of weights and thresholds; thus, for each candidate network, its best individual is generated by the GA. The fitness values of these best individuals are then compared to select the overall best individual, the one with the fewest training iterations and the largest fitness value, which is used to build the BP neural network. By comparison, we adopted a three-layer neural network in which the numbers of input- and output-layer nodes were determined by the original image samples, so our main task is to optimize the number of hidden-layer nodes [18–35]. These steps can be expressed as follows.

Step 1. Organize all weights and thresholds together by sequence order, and generate chromosomes randomly.

Step 2. Design a fitness function as the reciprocal of the sum-squared error between the network's actual output and the desired value. Use it to calculate the fitness of each chromosome, and then determine whether the optimization criterion or the cycle limit has been reached. If it has, go to Step 4.

Step 3. According to the fitness degree, select individuals and produce new individuals according to certain crossover probability and mutation probability. Return to Step 2.

Step 4. Save the best individual. If the number of cycles is less than the number of candidate networks, change the number of hidden-layer nodes and return to Step 1; otherwise, compare the saved best individuals and select the overall best one.

Step 5. Split the best individual to gain the initial weights and thresholds of BPN.

Step 6. Carry out forward propagation for the BP neural network and calculate the overall error; then determine whether it meets the requirement. If it does, stop training.

Step 7. If the cycle limit has not been reached, carry out backpropagation for the BP neural network, adjust the weights, and return to Step 6; otherwise, stop the network's training.
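The steps above can be sketched as a minimal GA weight search over a toy network (the population size, crossover/mutation rates, network size, and the XOR task are illustrative assumptions of ours, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-3-1 network; a chromosome holds all weights and thresholds
# in sequence order (Step 1).
N_IN, N_HID, N_OUT = 2, 3, 1
N_GENES = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def forward(chrom, X):
    """Unpack a chromosome into layer weights and run the network."""
    i = 0
    W1 = chrom[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = chrom[i:i + N_HID]; i += N_HID
    W2 = chrom[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = chrom[i:]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def fitness(chrom, X, y):
    """Reciprocal of the sum-squared error (Step 2)."""
    err = forward(chrom, X).ravel() - y
    return 1.0 / (1.0 + np.sum(err ** 2))

# Toy task: XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

pop = rng.normal(size=(40, N_GENES))                     # random chromosomes
for gen in range(200):
    fit = np.array([fitness(c, X, y) for c in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:20]]                            # selection (Step 3)
    children = parents.copy()
    half = rng.random((20, N_GENES)) < 0.5               # uniform crossover
    children[half] = parents[::-1][half]
    children += rng.normal(scale=0.3, size=children.shape) * \
                (rng.random(children.shape) < 0.1)       # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c, X, y) for c in pop])]   # Steps 4-5
print(np.round(forward(best, X).ravel()))  # should approach [0, 1, 1, 0]
```

After splitting the best individual into initial weights, conventional BP iterations (Steps 6 and 7) would refine it further.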

2.3. Our Work
2.3.1. Detector Training

Ye et al. proposed a method for training the networks that improves their generalization in noise detection [32]. The method creates a set of primitive chessboard images [Pt1], in which each image comprises many 4×4 pixel grids whose 16 inner pixel values are the same and random. Adding noise of different densities to [Pt1] yields a set of noised images [Pt2]; comparing [Pt1] and [Pt2] yields [Pt3], which represents the noise-pixel information. For example, Figure 2 shows one group used to train the network adapted to noise densities above 25%. Figure 2(a) (belonging to [Pt1]) is an original chessboard image. Adding noise at a given density produces Figure 2(b) (belonging to [Pt2]); comparing Figures 2(a) and 2(b) then gives Figure 2(c) (belonging to [Pt3]), which represents the noise information we added.

Figure 2: Training sample.

We use [Pt2] as input and [Pt3] as output to train the GA-BPN. The so-called network training is an attempt to make the network's output close to [Pt3] by adjusting its inner parameters over several iterations.
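A hypothetical sketch of generating one such training group (the image size, 4×4 grid size, and salt-only noise are our illustrative assumptions):

```python
import numpy as np

def make_chessboard(n_grids=8, grid=4, seed=None):
    """A chessboard-style training image: each grid x grid block is
    filled with one random gray value (the 16 equal inner pixels per
    4x4 block described above)."""
    rng = np.random.default_rng(seed)
    blocks = rng.integers(0, 256, size=(n_grids, n_grids), dtype=np.uint8)
    return np.kron(blocks, np.ones((grid, grid), dtype=np.uint8))

pt1 = make_chessboard(seed=0)                        # [Pt1]: clean image
noise = np.random.default_rng(1).random(pt1.shape) < 0.25
pt2 = pt1.copy()
pt2[noise] = 255                                     # [Pt2]: ~25% salt noise
pt3 = (pt1 != pt2).astype(np.uint8)                  # [Pt3]: noise positions
print(pt3.sum() / pt3.size)  # fraction of labeled noise pixels
```

[Pt2] then serves as the network input and the binary map [Pt3] as the desired output.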

2.3.2. Recovery Method

Noise pixels should be replaced by an estimate of the original value; noise filtering generally improves as the estimated value approaches the original one. In this paper, we use an adaptive weighted average algorithm that varies with window size and distance. The algorithm works as follows.
(1) Set a filter window $W$ of size $w \times w$ ($w$ an odd value with $w \geq 3$), with original size $3 \times 3$ and its center on a pixel of the image. If the central point is not a noise point, shift the window to make its central point the next pixel of the image; go to Step (2) when the central point is a noise point.
(2) Count the unpolluted pixels in $W$. If there are enough of them, or $w$ has reached its maximum, go to Step (4); if not, go to Step (3).
(3) Enlarge $W$ by setting $w \leftarrow w + 2$ and go back to Step (2).
(4) Using the unpolluted pixels, calculate the output with formula (3) and replace the noise pixel:
$$\hat{x} = \frac{\sum_{(i,j)} \lambda_{ij} x_{ij}}{\sum_{(i,j)} \lambda_{ij}}, \tag{3}$$
where $x_{ij}$ represents an unpolluted pixel value at position $(i, j)$ in the window centered on the noise pixel, and $\lambda_{ij}$ is its weight, chosen to decrease with the distance $d_{ij}$ from $(i, j)$ to the window center (e.g., $\lambda_{ij} = 1/d_{ij}$), so that a pixel is more important the closer it is to the noise point.
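The recovery procedure can be sketched as follows (the clean-pixel threshold, maximum window size, and inverse-distance weight are our own reading of the procedure, not exact values from the paper):

```python
import numpy as np

def recover(noisy, noise_mask, w_max=9):
    """Adaptive weighted average: for each detected noise pixel, grow
    the window from 3x3 until clean pixels are found (or w_max is hit),
    then replace it with a distance-weighted average of the clean pixels."""
    out = noisy.astype(float)
    H, W = noisy.shape
    for i, j in zip(*np.nonzero(noise_mask)):
        w = 3
        while True:
            r = w // 2
            i0, i1 = max(i - r, 0), min(i + r + 1, H)
            j0, j1 = max(j - r, 0), min(j + r + 1, W)
            ys, xs = np.mgrid[i0:i1, j0:j1]
            clean = ~noise_mask[i0:i1, j0:j1]    # unpolluted pixels in W
            if clean.sum() >= 1 or w >= w_max:   # enough clean pixels?
                break
            w += 2                               # enlarge window (Step 3)
        if clean.any():
            d = np.hypot(ys - i, xs - j)[clean]
            lam = 1.0 / d                        # closer pixels weigh more
            out[i, j] = np.sum(lam * noisy[i0:i1, j0:j1][clean]) / lam.sum()
    return out

img = np.full((7, 7), 100, dtype=np.uint8)
mask = np.zeros_like(img, dtype=bool)
img[3, 3] = 255; mask[3, 3] = True
print(recover(img, mask)[3, 3])  # ~100.0: all clean neighbors equal 100
```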

3. Experiment Data

3.1. Detection Performance

The accuracy of noise detection is an important criterion for recent filters. The main indicators are the number of wrongly detected pixels (Wn) and the false alarm ratio (FAR) [22–26]. The performance of the proposed detector is shown in Table 1, using Lena, Baboon, Peppers, Cameraman, and Barbara as test images with 10% to 90% noise density added.

Table 1: Noise detection performance test.

A statistical analysis revealed that the Wn and FAR of Lena, Cameraman, and Barbara are almost zero, whereas those of Baboon and Peppers, images with rich texture and detail, are relatively large; improving on these images will be the basis of our future study. These indicators show that the proposed method performs robustly in noise detection and can therefore be used in the filtering experiments that follow.

3.2. Recovery Performance

The qualitative assessment of the recovered image is done by forming a difference image (between the original and the recovered image). For quantitative assessment of the restoration quality, the commonly used peak signal-to-noise ratio (PSNR) was adopted:
$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{cN} \sum_{i=1}^{N} \sum_{q=1}^{c} \bigl(x_q(i) - o_q(i)\bigr)^2},$$
where $c$ is the total number of color components, $N$ is the total number of image pixels, and $x_q(i)$ and $o_q(i)$ are the $q$th component of the noisy image pixel and its original value at pixel position $i$, respectively.
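A minimal sketch of the PSNR computation for single-channel 8-bit images (the case $c = 1$):

```python
import numpy as np

def psnr(original, restored):
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(255^2 / MSE)."""
    mse = np.mean((original.astype(float) - restored.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                     # one pixel off by 10
print(round(psnr(a, b), 2))       # ~46.19 dB
```

Higher PSNR means the restored image is closer to the original; a perfect restoration (MSE = 0) would make the ratio unbounded.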

Experiment 1. Add noise at 10%, 50%, and 90% densities to classical images. Use the standard median filter (SMF), the progressive switching median filter (PSMF), the adaptive median filter (AMF), and the proposed algorithm to filter these noised images, then calculate their PSNR values; the data are shown in Table 2.
For every image, the PSNR value of the proposed algorithm is higher than those of the others, indicating the generality of the proposed algorithm's good performance.
In removing noise and preserving details, our algorithm shows robust performance, which becomes more obvious at higher noise densities.

Table 2: PSNR of different algorithms when the image is corrupted by different noise densities.

Experiment 2. Add noise densities from 10% to 90% to the image Lena. Use several algorithms, including the proposed one, to filter these noised images, then calculate their PSNR values; the data are shown in Figure 3.
All PSNR values of the proposed method are higher than those of the others, and the last column, titled "Improved degree," shows the improvement. This means that the filtering performance of our algorithm is comprehensively better than that of the other algorithms. The biggest improvement is observed at extremely high noise density: at 90% noise density, the PSNR improves by 14.68 dB. The smallest improvement, 7.47 dB, is obtained at 40% noise density, and the remaining noise densities also show substantial improvements.
The proposed algorithm therefore considerably increases filtering performance at all noise densities.

Figure 3: PSNR of several algorithms.

Experiment 3. Add 90% noise density to "Barbara." Use several algorithms, including the proposed one, to filter the noised image; the results are shown in Figure 4. Comparing the results subjectively, Figures 4(b), 4(c), and 4(d) are seriously blurred. In contrast, our method suppresses the noise successfully while preserving more details: Figure 4(e) shows clearly better recovery than the traditional algorithms. This indicates that our algorithm outperforms traditional algorithms when an image is highly corrupted.

fig4
Figure 4: Barbara with 90% noise density.

Experiment 4. To further compare the capability of preserving image details [15], Figure 5 gives the restored results for "Baboon," an image with rich details, corrupted by 50% impulse noise and restored by SMF, AMF, PSMF, and the proposed algorithm. Our algorithm is clearly better than the others, showing that the approach gives good results on highly complex, detail-rich images. With the other methods, some noticeable noise remains unremoved and some details are lost or discontinuous, such as the hair around the baboon's mouth and the edges of the bridge. In contrast, the visual quality of our restored image is quite good, even with the abundance of image detail and the high noise level.

Figure 5: Baboon with 50% noise density.
3.3. Computation Time

Computation time is an important indicator when evaluating an algorithm, for it reflects whether the algorithm can be applied in practice [15]. The proposed algorithm includes two stages, noise detection and noise filtering, and two independent experiments were run on them. Ten runs were made on the image Lena at 10% to 90% noise densities, and the average results are shown in Figures 6 and 7. The filtering time is much greater than the detection time, and it grows as the noise density grows; overall, the computation time is acceptable. The test platform was a Pentium IV at 2.0 GHz with 1 GB RAM, running MATLAB 7.6.

Figure 6: Detection time.
Figure 7: Recovery time.

4. Conclusions

This paper proposes using a GA-BPN as the noise detector, with the ROLD feature as input, for image filtering. The experimental data show that, compared with other traditional filters, the proposed algorithm performs robustly in both noise removal and detail preservation, and its advantage becomes more obvious at higher noise densities.

Conflict of Interests

The authors declare that they have no conflict of interests.

Authors’ Contribution

All authors drafted, read, and approved the final paper.

Acknowledgments

This paper was funded under a grant from the National Natural Science Foundation of P.R. China (no. 61105115, no. 61075068, and no. 61172029), a major project to cultivate science and technology innovation from the Ministry of Education of China (no. 708045), Nanjing University of Information Science and Technology Research (no. 20070063), Open Project (KDX1102) of Jiangsu Key Laboratory of Meteorological Observation and Information Processing, National Department Public Benefit Research Foundation (GYHY200806017), and Industry-academic Joint Technological Innovations Fund Project of Jiangsu Province (2012t026).

References

  1. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB, Pearson Education India, 2004.
  2. T. Sun and Y. Neuvo, “Detail-preserving median based filters in image processing,” Pattern Recognition Letters, vol. 15, no. 4, pp. 341–347, 1994.
  3. E. Abreu, M. Lightstone, S. K. Mitra, and K. Arakawa, “A new efficient approach for the removal of impulse noise from highly corrupted images,” IEEE Transactions on Image Processing, vol. 5, no. 6, pp. 1012–1025, 1996.
  4. C. Xing, S. Wang, and H. Deng, “A new filtering algorithm based on extremum and median value,” Journal of Image and Graphics, vol. 6, no. 6, pp. 533–536, 2001.
  5. Z. Wang and D. Zhang, “Progressive switching median filter for the removal of impulse noise from highly corrupted images,” IEEE Transactions on Circuits and Systems II, vol. 46, no. 1, pp. 78–80, 1999.
  6. R. H. Chan, C. W. Ho, and M. Nikolova, “Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization,” IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1479–1485, 2005.
  7. I. V. Apalkov, P. S. Zvonarev, and V. V. Khryashchev, “Neural network adaptive switching median filter for image denoising,” in Proceedings of the International Conference on Computer as a Tool (EUROCON '05), pp. 959–962, November 2005.
  8. J. Zhang, Z. Lu, L. Shi et al., “Filtering images contaminated with salt and pepper noise with pulse-coupled neural networks,” Science in China E, vol. 34, no. 8, pp. 882–894, 2004 (Chinese).
  9. B. Majhi, P. K. Sa, and G. K. Panda, “ANN based adaptive thresholding for impulse detection,” in Proceedings of the 3rd IASTED International Conference on Signal Processing, Pattern Recognition, and Applications (SPPRA '06), pp. 294–297, February 2006.
  10. X. Min, Z. Wang, and J. Fang, “Temporal association based on dynamic depression synapses and chaotic neurons,” Neurocomputing, vol. 74, pp. 3242–3247, 2011.
  11. Q. Liu, H. Jin, X. Tang, H. Lu, and S. Ma, “A new extension of kernel feature and its application for visual recognition,” Neurocomputing, vol. 71, no. 10–12, pp. 1850–1856, 2008.
  12. T. Li and X. Ye, “Improved stability criteria of neural networks with time-varying delays: an augmented LKF approach,” Neurocomputing, vol. 73, no. 4–6, pp. 1038–1047, 2010.
  13. L. Weng, W. Cai, M. J. Zhang, X. H. Liao, and D. Y. Song, “Neural-memory based control of micro air vehicles (MAVs) with flapping wings,” in Proceedings of the 4th International Symposium on Neural Networks (ISNN '07), vol. 4491 of Lecture Notes in Computer Science, pp. 70–80, Nanjing, China, 2007.
  14. J. Xu, Y. Y. Cao, D. Pi, and Y. Sun, “An estimation of the domain of attraction for recurrent neural networks with time-varying delays,” Neurocomputing, vol. 71, no. 7–9, pp. 1566–1577, 2008.
  15. Y. Dong, R. H. Chan, and S. Xu, “A detection statistic for random-valued impulse noise,” IEEE Transactions on Image Processing, vol. 16, no. 4, pp. 1112–1120, 2007.
  16. N. Z. Janah and B. Baharudin, “Mixed impulse fuzzy filter based on MAD, ROAD, and genetic algorithms,” in Proceedings of the International Conference on Soft Computing and Pattern Recognition (SoCPaR '09), pp. 82–87, December 2009.
  17. D. B. Fogel, L. J. Fogel, and V. W. Porto, “Evolutionary methods for training neural networks,” in Proceedings of the IEEE Conference on Neural Networks for Ocean Engineering, pp. 317–327, August 1991.
  18. M. Xia, J. Fang, F. Pan, and E. Bai, “Robust sequence memory in sparsely-connected networks with controllable steady-state period,” Neurocomputing, vol. 72, no. 13–15, pp. 3123–3130, 2009.
  19. K. Hu, A. G. Song, W. L. Wang, Y. Zhang, and Z. Fan, “Fault detection and estimation for non-Gaussian stochastic systems with time varying delay,” Advances in Difference Equations, vol. 2013, article 22, 2013.
  20. Y. C. Zhang and C. Chen, “An improved collaborative filtering algorithm based on bipartite network,” in Proceedings of the 7th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD '10), pp. 2446–2449, August 2010.
  21. Q. Ran, “Edge information extraction algorithm for CT cerebrovascular medical image based on neural network,” Journal of Electronic Measurement and Instrument, vol. 4, article 010, 2010.
  22. M. Xia, Z. Wang, and J. Fang, “Sequence memory with dynamic synapses and chaotic neurons,” in Proceedings of the International Conference on Cognitive Neurodynamics (ICCN '08), pp. 219–223, 2008.
  23. C. Sun, Y. Feng, and Z. Ding, “New locally adaptive image interpolation algorithm based on edge preserving,” Chinese Journal of Scientific Instrument, vol. 31, no. 10, pp. 2279–2284, 2010.
  24. G. Kaliraj and S. Baskar, “An efficient approach for the removal of impulse noise from the corrupted image using neural network based impulse detector,” Image and Vision Computing, vol. 28, no. 3, pp. 458–466, 2010.
  25. Z. Fang and J. J. W. Peizhen, “Learning-based resolution enhancement technique for single-frame coke micrograph,” Journal of Electronic Measurement and Instrument, vol. 7, article 005, 2011.
  26. Y. Hanada, M. Muneyasu, and A. Asano, “An improvement of unsupervised design method for weighted median filters using GA,” in Proceedings of the International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS '09), pp. 204–207, December 2009.
  27. B. Smolka, “On the adaptive impulsive noise attenuation in color images,” in Proceedings of the 3rd International Conference on Image Analysis and Recognition (ICIAR '06), vol. 4141 of Lecture Notes in Computer Science, pp. 307–317, Povoa de Varzim, Portugal, 2006.
  28. W. Zhang, Y. Tang, J.-A. Fang, and X. Wu, “Stability of delayed neural networks with time-varying impulses,” Neural Networks, vol. 36, pp. 59–63, 2012.
  29. M. Xia, Y. Zhang, L. Weng, and X. Ye, “Fashion retailing forecasting based on extreme learning machine with adaptive metrics of inputs,” Knowledge-Based Systems, vol. 36, pp. 253–259, 2012.
  30. K. Hu et al., “Edge detection of optimal wavelet scale space image,” Journal of Nanjing University of Information Science & Technology, vol. 3, no. 3, pp. 259–264, 2011.
  31. B. Q. Cao and J. X. Liu, “Currency recognition modeling research based on BP neural network improved by genetic algorithm,” in Proceedings of the International Conference on Computer Modeling and Simulation (ICCMS '10), pp. 246–250, January 2010.
  32. X. L. Ye, L. Qian, and K. Hu, “An adaptive denoising method for salt and pepper noise detected by neural network,” Opto-Electronic Engineering, vol. 38, no. 3, pp. 119–124, 2011 (Chinese).
  33. F. Su, G. Fang, and N. M. Kwok, “Adaptive color feature identification in image for object tracking,” Mathematical Problems in Engineering, vol. 2012, Article ID 509597, 18 pages, 2012.
  34. M. Yin, W. Liu, J. Shui, and J. Wu, “Quaternion wavelet analysis and application in image denoising,” Mathematical Problems in Engineering, vol. 2012, Article ID 493976, 21 pages, 2012.
  35. M. Yin, W. Liu, J. Shui, and J. Wu, “Quaternion wavelet analysis and application in image denoising,” Mathematical Problems in Engineering, vol. 2012, Article ID 493976, 10 pages, 2012.