Research Article  Open Access
Marek Vajgl, Irina Perfilieva, Petra Hod'áková, "Advanced F-Transform-Based Image Fusion", Advances in Fuzzy Systems, vol. 2012, Article ID 125086, 9 pages, 2012. https://doi.org/10.1155/2012/125086
Advanced F-Transform-Based Image Fusion
Abstract
We propose to use the modern technique of the F-transform in order to show that it can be successfully applied to image fusion. We recall two working algorithms (SA, the simple algorithm, and CA, the complete algorithm) which are based on the F-transform and discuss how they can be improved. We propose a new algorithm (ESA, the enhanced simple algorithm) which is effective in time and free of frequently encountered shortcomings.
1. Introduction
Image processing is nowadays one of the most interesting research areas, where traditional and new approaches are applied together and bring significant advantages. In this contribution, we consider image fusion, which is one of the many subjects of image processing. Image fusion aims at the integration of complementary distorted multisensor, multitemporal, and/or multiview scenes into one new image which contains the "best" parts of each scene. Thus, the main problem in the area of image fusion is to find the least distorted scene for every given pixel.
A local focus measure is traditionally used for selection of an undistorted scene. The scene which maximizes the focus measure is selected. Usually, the focus measure is a measure of high frequency occurrences in the image spectrum. This measure is used when a source of distortion is connected with blurring which suppresses high frequencies in an image. In this case, it is desirable that a focus measure decreases with an increase of blurring.
There are various fusion methodologies currently in use. The methodologies differ according to different mathematical fields: statistical methods (e.g., using aggregation operators, such as the Min-Max method [1]), estimation theory [2], fuzzy methods (see [3, 4]), optimization methods (e.g., neural networks, genetic algorithms [5]), and multiscale decomposition methods, which incorporate various transforms, for example, discrete wavelet transforms (for a classification of these methods see [6], a classification of wavelet-based image fusion methods can be found in [7], and for applications for blurred and unregistered images, refer to [8]).
In our approach, we propose to use the modern technique of the F-transform and to show that it can be successfully applied to image fusion. Our previous attempts have been reported in [9–12]. The original motivation for the F-transform (a short name for the fuzzy transform) came from fuzzy modeling [13, 14]. Similarly to traditional transforms (Fourier and wavelet), the F-transform performs a transformation of an original universe of functions into a universe of their "skeleton models" (vectors of F-transform components) in which further computation is easier. Moreover, sometimes, the F-transform can be more efficient than its counterparts. The F-transform proves to be a successful methodology with various applications: image compression and reconstruction [15, 16], edge detection [17, 18], numeric solution of differential equations [19], and time-series processing [20].
The F-transform-based approach to image fusion has been proposed in [11, 12]. The main idea is a combination of (at least) two fusion operators, both based on the F-transform. The first fusion operator is applied to F-transform components of scenes and is based on a rough partition of the scene domain. The second fusion operator is applied to the residuals of scenes with respect to inverse F-transforms with fused components and is based on a finer partition of the same domain. Although this approach is not explicitly based on focus measures, it uses a fusion operator which is able to choose an undistorted scene among the available blurred ones. In this contribution, we analyze two methods of fusion that have been discussed in [11, 12] and propose a new method which can be characterized as a weighted combination of those two. We show that the new method is computationally more effective than the complete algorithm of fusion and gives better quality than the simple algorithm of fusion; both were proposed in [11, 12].
2. F-Transform
Before going into the details of image fusion, we give a brief characterization of the F-transform technique applied herein (we refer to [13] for a complete description).
Generally speaking, the F-transform is a linear mapping from a set of ordinary continuous/discrete functions over a domain $P$ onto a set of discrete functions (vectors) defined on a fuzzy partition of $P$. We assume that the reader is familiar with the notion of a fuzzy set and the way(s) of its representation. In this paper, we identify fuzzy sets with their membership functions. In the explanation below, we will speak about the F-transform of an image function $u$, which is a discrete function of two variables, defined over the set $P$ of pixels and taking values from the set $\mathbb{R}$ of reals. Throughout this text, we will always assume that $u$, $P$, and $\mathbb{R}$ have the same meaning as above.
Let $[a,b]$ be an interval on the real line $\mathbb{R}$, $n \ge 2$ the number of fuzzy sets in a fuzzy partition of $[a,b]$, and $h = (b-a)/(n-1)$ the distance between nodes $x_1, \dots, x_n$, where $x_k = a + h(k-1)$, $k = 1, \dots, n$. Fuzzy sets $A_1, \dots, A_n : [a,b] \to [0,1]$ establish a uniform fuzzy partition of $[a,b]$ if the following requirements are fulfilled. (1) For every $k = 1, \dots, n$, $A_k(x) = 0$ if $x \notin [x_{k-1}, x_{k+1}]$, where $x_0 = a$, $x_{n+1} = b$; (2) for every $k = 1, \dots, n$, $A_k$ is continuous on $[x_{k-1}, x_{k+1}]$, where $x_0 = a$, $x_{n+1} = b$; (3) for every $k = 1, \dots, n$, $A_k(x_k) = 1$; (4) for every $x \in [a,b]$, $\sum_{k=1}^{n} A_k(x) = 1$; (5) for every $k = 1, \dots, n$, $A_k$ is symmetrical with respect to the line $x = x_k$.
The membership functions of the respective fuzzy sets in a fuzzy partition are called basic functions. An example of triangular basic functions $A_1, \dots, A_n$ on the interval $[a,b]$ is given below:
$$A_1(x) = \begin{cases} 1 - \dfrac{x - x_1}{h}, & x \in [x_1, x_2],\\ 0, & \text{otherwise,} \end{cases} \qquad A_k(x) = \begin{cases} \dfrac{x - x_{k-1}}{h}, & x \in [x_{k-1}, x_k],\\ \dfrac{x_{k+1} - x}{h}, & x \in [x_k, x_{k+1}],\\ 0, & \text{otherwise,} \end{cases} \quad k = 2, \dots, n-1,$$
$$A_n(x) = \begin{cases} \dfrac{x - x_{n-1}}{h}, & x \in [x_{n-1}, x_n],\\ 0, & \text{otherwise.} \end{cases}$$
Let us remark that (1) the shape (e.g., triangular or sinusoidal) of a basic function in a fuzzy partition is not predetermined and can be chosen according to additional requirements, for example, smoothness (see [13]); (2) if the shape of a basic function of a uniform fuzzy partition of $[a,b]$ is chosen, then the basic function is uniquely determined by the number of points that are "covered" by every "full" basic function.
Similarly, a uniform fuzzy partition of an interval $[c,d]$ with basic functions $B_1, \dots, B_m$ can be defined. Then a fuzzy partition of $[a,b] \times [c,d]$ is obtained by the fuzzy sets $A_k \times B_l$, $k = 1, \dots, n$, $l = 1, \dots, m$. Below, we will always assume that $n$ and $m$ denote the quantities of fuzzy sets in the fuzzy partitions of $[a,b]$ and $[c,d]$, respectively.
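The uniform triangular partition above can be sketched in NumPy; this is an illustrative helper of ours (the name `triangular_partition` and the zero-based discrete interval are our conventions, not the paper's):

```python
import numpy as np

def triangular_partition(num_points, n):
    """Uniform fuzzy partition of the discrete interval {0, ..., num_points - 1}
    by n triangular basic functions.  Returns an (n, num_points) matrix A with
    A[k, i] = A_k(i); nodes are equally spaced with step h = (num_points - 1) / (n - 1)."""
    x = np.arange(num_points, dtype=float)
    nodes = np.linspace(0.0, num_points - 1.0, n)
    h = nodes[1] - nodes[0]
    # each row is a triangle of height 1 centred at its node, with support width 2h
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
```

For overlapping triangles with spacing $h$, the rows sum to 1 at every point, which is exactly the Ruspini condition (4) of the definition above.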
Let $u : P \to \mathbb{R}$ be an image function on the set of pixels $P = \{(i,j) \mid i = 1, \dots, N,\ j = 1, \dots, M\}$, and let fuzzy sets $A_k \times B_l$, $k = 1, \dots, n$, $l = 1, \dots, m$, establish a fuzzy partition of $[1,N] \times [1,M]$. The (direct) F-transform of $u$ (with respect to the chosen partition) is an image of the map defined by
$$U_{kl} = \frac{\sum_{j=1}^{M} \sum_{i=1}^{N} u(i,j)\, A_k(i)\, B_l(j)}{\sum_{j=1}^{M} \sum_{i=1}^{N} A_k(i)\, B_l(j)},$$
where $k = 1, \dots, n$, $l = 1, \dots, m$. The value $U_{kl}$ is called an F-transform component of $u$. The components can be arranged into the matrix representation as follows:
$$F[u] = \begin{pmatrix} U_{11} & \cdots & U_{1m}\\ \vdots & \ddots & \vdots\\ U_{n1} & \cdots & U_{nm} \end{pmatrix}.$$
The inverse F-transform of $u$ is a function $\hat{u}_{nm}$ on $P$, which is represented by the following inversion formula, where $(i,j) \in P$:
$$\hat{u}_{nm}(i,j) = \sum_{k=1}^{n} \sum_{l=1}^{m} U_{kl}\, A_k(i)\, B_l(j).$$
It can be shown that the inverse F-transform approximates the original function $u$ on the domain $P$. The proof can be found in [13, 14].
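The direct and inverse F-transform of a 2D image reduce to a few matrix products. The following is a minimal sketch under our own naming (`ft2_direct`, `ft2_inverse`; the partition helper repeats the triangular construction of Section 2):

```python
import numpy as np

def triangular_partition(num_points, n):
    """Uniform triangular fuzzy partition; A[k, i] = A_k(i) (see Section 2)."""
    x = np.arange(num_points, dtype=float)
    nodes = np.linspace(0.0, num_points - 1.0, n)
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def ft2_direct(u, A, B):
    """Direct F-transform of an N x M image u: the (n, m) component matrix with
    U[k, l] = sum_ij u(i, j) A_k(i) B_l(j) / sum_ij A_k(i) B_l(j)."""
    num = A @ u @ B.T
    # the denominator factorises: sum_ij A_k(i) B_l(j) = (sum_i A_k(i)) (sum_j B_l(j))
    den = A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :]
    return num / den

def ft2_inverse(U, A, B):
    """Inverse F-transform: u_hat(i, j) = sum_kl U[k, l] A_k(i) B_l(j)."""
    return A.T @ U @ B
```

Since each component is a weighted average and the basic functions sum to one, a constant image is reproduced exactly by the inverse F-transform; for general images, the inverse F-transform is a smooth approximation of $u$.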
3. The Problem of Image Fusion
Image fusion aims at the integration of various complementary image data into a single, new image with the best possible quality. The term "quality" depends on the demands of the specific application, which is usually related to its usefulness for human visual perception, computer vision, or further processing. More formally, if $u$ is an ideal image (considered as a function of two variables) and $c_1, \dots, c_K$ are acquired (input) images, then the relation between each $c_i$ and $u$ can be expressed by
$$c_i(x,y) = d_i(u(x,y)) + e_i(x,y), \quad i = 1, \dots, K,$$
where $d_i$ is an unknown operator describing the image degradation and $e_i$ is an additive random noise. The problem of fusion consists in finding an image $\hat{u}$ such that it is close to $u$ and it is better (in terms of a chosen quality) than any of $c_1, \dots, c_K$. This problem occurs, for example, if multiple photos with focuses on different objects of the same scene are taken.
4. Image Decomposition for Image Fusion
Let us explain the mechanism of fusion with the help of the F-transform. It is based on a chosen decomposition of an image. We distinguish a one-level and a higher-level decomposition. We assume that the image $u$ is a discrete real function defined on the array of pixels $P = \{(i,j) \mid i = 1, \dots, N,\ j = 1, \dots, M\}$, so that $u : P \to \mathbb{R}$. Moreover, let fuzzy sets $A_k \times B_l$, $k = 1, \dots, n$, $l = 1, \dots, m$, where $n \le N$, $m \le M$, establish a fuzzy partition of $[1,N] \times [1,M]$.
We begin with the following representation of $u$ on $P$:
$$u(i,j) = \hat{u}_{nm}(i,j) + e(i,j), \quad (7)$$
$$e(i,j) = u(i,j) - \hat{u}_{nm}(i,j), \quad (8)$$
where $\hat{u}_{nm}$ is the inverse F-transform of $u$ and $e$ is the respective first difference. If we replace $e$ in (7) by its inverse F-transform $\hat{e}_{NM}$ with respect to the finest partition of $[1,N] \times [1,M]$, the above representation can then be rewritten as follows:
$$u(i,j) = \hat{u}_{nm}(i,j) + \hat{e}_{NM}(i,j). \quad (9)$$
We call (9) a one-level decomposition of $u$ on $P$.
If the function $u$ is smooth, then the function $e$ is small, and the one-level decomposition (9) is sufficient for our fusion algorithm. However, images generally contain various types of degradation that disrupt their smoothness. As a result, the function $e$ in (9) is not negligible, and the one-level decomposition is insufficient for our purpose. In this case, we continue with the decomposition of the first difference $e$ in (7). We decompose $e$ into its inverse F-transform $\hat{e}_{n_2 m_2}$ (with respect to a finer fuzzy partition of $[1,N] \times [1,M]$ with $n_2$ and $m_2$ basic functions, resp.) and the second difference $e^{(2)}$. Thus, we obtain the second-level decomposition of $u$ on $P$ (writing $n_1 = n$, $m_1 = m$):
$$u(i,j) = \hat{u}_{n_1 m_1}(i,j) + \hat{e}_{n_2 m_2}(i,j) + e^{(2)}(i,j). \quad (10)$$
In the same manner, we can obtain a higher-level decomposition of $u$ on $P$:
$$u(i,j) = \hat{u}_{n_1 m_1}(i,j) + \sum_{s=2}^{k} \hat{e}^{(s-1)}_{n_s m_s}(i,j) + e^{(k)}(i,j), \quad (11)$$
where
$$e^{(s)} = e^{(s-1)} - \hat{e}^{(s-1)}_{n_s m_s}, \quad e^{(1)} = e. \quad (12)$$
Below, we will be working with the two decompositions of $u$ that are given by (9) and (11).
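The one-level split of (7) can be sketched directly: compute the inverse F-transform of $u$ and keep the residual as the first difference. This is our own minimal sketch (helper names as before), not the paper's implementation:

```python
import numpy as np

def triangular_partition(num_points, n):
    x = np.arange(num_points, dtype=float)
    nodes = np.linspace(0.0, num_points - 1.0, n)
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def ft2_direct(u, A, B):
    return (A @ u @ B.T) / (A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :])

def ft2_inverse(U, A, B):
    return A.T @ U @ B

def one_level_decomposition(u, A, B):
    """Split u into its inverse F-transform u_hat (the coarse 'skeleton') and
    the first difference e = u - u_hat, so that u = u_hat + e holds exactly;
    e concentrates the high-frequency detail that the coarse partition misses."""
    u_hat = ft2_inverse(ft2_direct(u, A, B), A, B)
    return u_hat, u - u_hat
```

For a smooth (here, constant) image, the first difference vanishes, matching the remark that the one-level decomposition suffices for smooth images.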
5. Two Algorithms for Image Fusion
In [12], we proposed two algorithms: (i) the simple F-transform-based fusion algorithm (SA) and (ii) the complete F-transform-based fusion algorithm (CA). These algorithms are based on the decompositions (9) and (11), respectively.
The principal role in the fusion algorithms CA and SA is played by the fusion operator $\kappa : \mathbb{R}^K \to \mathbb{R}$, defined as follows:
$$\kappa(a_1, \dots, a_K) = a_p, \quad \text{where } p = \arg\max_{1 \le i \le K} |a_i|. \quad (13)$$
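Applied element-wise to aligned arrays of F-transform components, the operator can be sketched as follows; we assume here the max-magnitude-selection reading of the operator (the exact operator in [12] may differ in details), and the name `kappa` is ours:

```python
import numpy as np

def kappa(components):
    """Element-wise fusion operator over K aligned (n, m) component arrays:
    at each position it keeps the value of largest magnitude, i.e.
    kappa(a_1, ..., a_K) = a_p with |a_p| maximal."""
    stack = np.stack(components)                 # shape (K, n, m)
    winner = np.abs(stack).argmax(axis=0)        # index of the winning input per position
    return np.take_along_axis(stack, winner[None, ...], axis=0)[0]
```

On error-function components, large magnitude signals a sharp (undistorted) region, which is why this selection favors the in-focus input at each location.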
5.1. Simple F-Transform-Based Fusion Algorithm
In this section, we give a "block" description of the SA without the technical details, which can be found in [12] and will not be repeated here. We assume that input images $c_1, \dots, c_K$ with various types of degradation are given. Our aim is to recognize undistorted parts in the given images and to fuse them into one image. (i) Choose values $n$, $m$ such that $n \le N$, $m \le M$, and create a fuzzy partition of $[1,N] \times [1,M]$ by fuzzy sets $A_k \times B_l$, $k = 1, \dots, n$, $l = 1, \dots, m$. (ii) Decompose the input images into inverse F-transforms and error functions according to the one-level decomposition (9). (iii) Apply the fusion operator (13) to the respective F-transform components of $c_1, \dots, c_K$ and obtain the fused F-transform components of a new image. (iv) Apply the fusion operator to the respective F-transform components of the error functions $e_1, \dots, e_K$ and obtain the fused F-transform components of a new error function. (v) Reconstruct the fused image from the inverse F-transforms with the fused components of the new image and the fused components of the new error function.
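Steps (i)-(v) above can be sketched end to end. This is our illustrative reading, not the authors' implementation: we fuse the error components on a user-chosen finer partition (`n_fine`, `m_fine`), assume the max-magnitude fusion operator, and reuse the helper names introduced earlier:

```python
import numpy as np

def triangular_partition(num_points, n):
    x = np.arange(num_points, dtype=float)
    nodes = np.linspace(0.0, num_points - 1.0, n)
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def ft2_direct(u, A, B):
    return (A @ u @ B.T) / (A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :])

def ft2_inverse(U, A, B):
    return A.T @ U @ B

def kappa(components):
    stack = np.stack(components)
    winner = np.abs(stack).argmax(axis=0)
    return np.take_along_axis(stack, winner[None, ...], axis=0)[0]

def simple_fusion(images, n, m, n_fine, m_fine):
    """Sketch of the SA: decompose each input into a coarse inverse F-transform
    and an error function, fuse both parts component-wise, and reconstruct."""
    N, M = images[0].shape
    A, B = triangular_partition(N, n), triangular_partition(M, m)       # step (i)
    Af, Bf = triangular_partition(N, n_fine), triangular_partition(M, m_fine)
    comps = [ft2_direct(u, A, B) for u in images]                        # step (ii)
    errs = [u - ft2_inverse(U, A, B) for u, U in zip(images, comps)]     # step (ii)
    fused_comps = kappa(comps)                                           # step (iii)
    fused_err_comps = kappa([ft2_direct(e, Af, Bf) for e in errs])       # step (iv)
    return (ft2_inverse(fused_comps, A, B)                               # step (v)
            + ft2_inverse(fused_err_comps, Af, Bf))
```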
The SA-based fusion is very efficient if we can guess values $n$, $m$ that characterize a proper fuzzy partition. Usually, this is done manually according to the user's skill. The dependence on fuzzy partition parameters can be considered the main shortcoming of this otherwise effective algorithm. Two recommendations follow from our experience. (i) For complex images (with many small details), higher values of $n$, $m$ give better results. (ii) If a triangular shape of a basic function is chosen, then the generic choice of $n$, $m$ is such that every "full" basic function covers exactly 3 points.
The algorithm SA is illustrated on the examples "Table" and "Castle"; see Figures 2 and 4 below. There are two inputs of the image "Table" (Figure 1) and four inputs of the image "Castle" (Figure 3).
5.2. Complete F-Transform-Based Fusion Algorithm
The CA-based fusion does not depend on a single choice of fuzzy partition parameters (as in the case of the SA), because it runs through a sequence of increasing values $n$, $m$. The description of the CA is similar to that of the SA, except for step 4, which is repeated in a cycle. Therefore, the quality of fusion is high, but the implementation of the CA is rather slow and memory-consuming, especially for large images. For illustration, see Figures 5 and 6.
5.3. Fusion Artefacts
In this section, we characterize input images for which it is reasonable to apply the SA or CA. By doing this, we put restrictions on the inputs that are acceptable to the algorithms SA and CA. First of all, input images should be taken without shifting or rotation. Secondly, blurred parts of input images should not contain many small details, like leaves on trees. If these restrictions are violated, the fusion made by the SA or CA can leave "artefacts," like "ghosts" or "lakes"; see the explanation below, where we assume that there are two input images for the fusion. (i) Ghosts: this happens when a sharp edge of a nondamaged input image is significantly blurred in the other one. As a result of the SA or CA, the edge is perfectly reconstructed, but its neighboring area is affected by the presence of the edge (see Figure 7). (ii) Lakes: this may happen when the fusion is performed by either the SA or the CA. In the case of the SA, a "lake" is a result of choosing neighboring areas with significantly different colors from different input images. In the case of the CA, a "lake" is a result of rounding off numbers (see Figure 8).
6. Improved F-Transform Fusion
The main purpose of this contribution is to create a method which is as fast as the SA and as efficient as the CA. The following particular goals should be achieved. (i) Avoid running through a long sequence of possible partitions (as in the case of the CA). (ii) Automatically adjust the parameters of the fusion algorithm according to the level of blurring and the location of a blurred area in the input images. (iii) Eliminate situations which can lead to "ghosts" and "lakes" in a fused image.
6.1. Proposed Solution
The main idea of the improved F-transform fusion is to enhance the SA by adding another run of the F-transform over the first difference (7). Our explanation is as follows: the first run of the F-transform is aimed at edge detection in each input image, while the second run propagates only sharp edges (and their local areas) to the fused image. The informal description of the enhanced simple algorithm (ESA) is given in Algorithm 1.
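The idea of a second F-transform run over the first difference can be illustrated as follows. This is a hypothetical reading of ours, NOT the authors' Algorithm 1: we treat the magnitude of each residual as a sharpness map, smooth it with a second F-transform run, and use the result as pixel-wise weights (`esa_fusion` and all helper names are our own):

```python
import numpy as np

def triangular_partition(num_points, n):
    x = np.arange(num_points, dtype=float)
    nodes = np.linspace(0.0, num_points - 1.0, n)
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def ft2_direct(u, A, B):
    return (A @ u @ B.T) / (A.sum(axis=1)[:, None] * B.sum(axis=1)[None, :])

def ft2_inverse(U, A, B):
    return A.T @ U @ B

def esa_fusion(images, n, m):
    """Illustrative sketch: the first F-transform run yields residuals
    e_i = u_i - u_hat_i whose magnitude marks sharp edges; a second run over
    |e_i| turns them into smooth local sharpness weights, and the fused image
    is the pixel-wise weight-normalised combination of the inputs."""
    N, M = images[0].shape
    A, B = triangular_partition(N, n), triangular_partition(M, m)
    resid = [u - ft2_inverse(ft2_direct(u, A, B), A, B) for u in images]
    weights = np.stack([ft2_inverse(ft2_direct(np.abs(e), A, B), A, B)
                        for e in resid])
    weights = np.maximum(weights, 1e-12)           # avoid division by zero
    weights /= weights.sum(axis=0, keepdims=True)  # normalise across inputs
    return sum(w * u for w, u in zip(weights, images))
```

Because the weights vary smoothly (they are themselves inverse F-transforms), sharp edges of one input blend gradually into their surroundings instead of producing abrupt "ghost" or "lake" transitions.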

Although the algorithm ESA is written for grayscale input images, it can easily be extended to color images represented in the RGB or YUV models. Our tests were performed for both of them. In the case of RGB, the respective R, G, and B channels were processed independently and then combined. In the case of YUV, the Y component of the model was used to obtain the weights (this component contains the most relevant information about the image intensity), while the U and V components were processed with the obtained weights.
Let us remark that the ESA-fused images are (in general) better than those of the SA or CA. This can be seen visually in Figures 9 and 10. The main advantages of the ESA are as follows. (i) Time: the execution time is smaller than in the case of the CA (in the examples above, 11 versus 111 for "Table" and 18 versus 359 for "Castle"), while the quality of the ESA fusion is better than that of the SA. Examples of run times and memory consumption are presented in Table 1 (notice that the memory consumption significantly depends on the memory management of the implementation environment). (ii) Ghosts: the ghost effect is reduced. The "ghost" effects (seen around the tower roof in the image "Castle" and around the buttons and the clock in the image "Table") are removed, as can be seen in Figure 11. (iii) Lakes: the lake effect is eliminated. The "lakes" are almost entirely removed, as can be seen from Figures 8, 9, and 12.

6.2. Comparison between Three Algorithms
In this section, we show that, in general, the ESA fusion has better execution parameters than the SA or CA fusion. We experimented with numerous images which, due to space limitations, cannot be presented in this paper. An exception is made for the image "Balls" with geometric figures, to show how the fusion methods reconstruct edges. In Figure 13, two inputs of the image "Balls" are given, and in Figure 14, three fusions of the same image are demonstrated.
In Table 1, we demonstrate that the complexity (measured by the execution time or by the memory used) of the newly proposed ESA fusion is greater than the complexity of the SA and less than the complexity of the CA.
In Table 2, we demonstrate that the quality of fusion (measured by the values of MSE and PSNR) of the newly proposed ESA fusion is better (the MSE value is smaller) than that of the SA and, in some cases (the image "Balls"), better than that of the CA. Table 2 does not contain the values of MSE and PSNR for the image "Table," because (as happens in reality) no original (nondistorted) image was at our disposal.
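The two quality measures used in Table 2 are standard; for reference, a minimal sketch (with the conventional 255 peak value for 8-bit images as a default assumption):

```python
import numpy as np

def mse(u, v):
    """Mean squared error between two images; smaller is better."""
    d = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(np.mean(d * d))

def psnr(u, v, peak=255.0):
    """Peak signal-to-noise ratio in dB, 10*log10(peak^2 / MSE);
    larger is better, infinite for identical images."""
    m = mse(u, v)
    return float('inf') if m == 0.0 else 10.0 * np.log10(peak * peak / m)
```

Both measures require the original (nondistorted) image as a reference, which is exactly why they could not be reported for "Table".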

7. Conclusion
In this paper, we continued the research started in [9–12] on effective fusion algorithms. We proposed an improved method of F-transform-based fusion which is free from the following imperfections: long running time, dependence on initial parameters which characterize a proper fuzzy partition, and the presence of fusion artefacts, like "ghosts" or "lakes."
Acknowledgments
This work relates to Department of the Navy Grant N62909-12-1-7039 issued by the Office of Naval Research Global. The US Government has a royalty-free license throughout the world in all copyrightable material contained herein. Additional support was given by the project SGS12/PRF/2012 (Image Processing and Artefacts Detection Using Soft Computing).
References
1. R. S. Blum, "Robust image fusion using a statistical signal processing approach," Information Fusion, vol. 6, no. 2, pp. 119–128, 2005.
2. A. Loza, D. Bull, N. Canagarajah, and A. Achim, "Non-Gaussian model-based fusion of noisy images in the wavelet domain," Computer Vision and Image Understanding, vol. 114, no. 1, pp. 54–65, 2010.
3. H. Singh, J. Raj, G. Kaur, and T. Meitzler, "Image fusion using fuzzy logic and applications," in Proceedings of the IEEE International Conference on Fuzzy Systems, vol. 1, pp. 337–340, July 2004.
4. R. Ranjan, H. Singh, T. Meitzler, and G. R. Gerhart, "Iterative image fusion technique using fuzzy and neuro fuzzy logic and applications," in Proceedings of the Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS '05), pp. 706–710, June 2005.
5. A. Mumtaz, A. Majid, and A. Mumtaz, "Genetic algorithms and its application to image fusion," in Proceedings of the 4th IEEE International Conference on Emerging Technologies (ICET '08), pp. 6–10, October 2008.
6. G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Information Fusion, vol. 4, no. 4, pp. 259–280, 2003.
7. K. Amolins, Y. Zhang, and P. Dare, "Wavelet based image fusion techniques – an introduction, review and comparison," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 62, no. 4, pp. 249–263, 2007.
8. F. Šroubek and J. Flusser, "Fusion of blurred images," in Multi-Sensor Image Fusion and Its Applications, Z. Liu and R. Blum, Eds., Signal Processing and Communications Series, CRC Press, San Francisco, Calif, USA, 2005.
9. M. Daňková and R. Valášek, "Full fuzzy transform and the problem of image fusion," Journal of Electrical Engineering, no. 12, pp. 82–84, 2006.
10. I. Perfilieva and M. Daňková, "Image fusion on the basis of fuzzy transforms," in Proceedings of the 8th International FLINS Conference on Computational Intelligence in Decision and Control, pp. 471–476, Madrid, Spain, September 2008.
11. I. Perfilieva, M. Daňková, P. Hod'áková, and M. Vajgl, "The use of F-transform for image fusion algorithms," in Proceedings of the International Conference of Soft Computing and Pattern Recognition (SoCPaR '10), pp. 472–477, December 2010.
12. P. Hod'áková, I. Perfilieva, M. Daňková, and M. Vajgl, "F-transform based image fusion," in Image Fusion, O. Ukimura, Ed., pp. 3–22, InTech, Rijeka, Croatia, 2011.
13. I. Perfilieva, "Fuzzy transforms: theory and applications," Fuzzy Sets and Systems, vol. 157, no. 8, pp. 993–1023, 2006.
14. I. Perfilieva, "Fuzzy transforms: a challenge to conventional transforms," Advances in Imaging and Electron Physics, vol. 147, pp. 137–196, 2007.
15. I. Perfilieva, V. Pavliska, M. Vajgl, and B. De Baets, "Advanced image compression on the basis of fuzzy transforms," in Proceedings of the 12th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU '08), pp. 1167–1174, Malaga, Spain, 2008.
16. F. Di Martino, V. Loia, I. Perfilieva, and S. Sessa, "An image coding/decoding method based on direct and inverse fuzzy transforms," International Journal of Approximate Reasoning, vol. 48, no. 1, pp. 110–131, 2008.
17. I. Perfilieva, M. Daňková, P. Hod'áková, and M. Vajgl, "Edge detection using F-transform," in Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA '11), pp. 672–677, Cordoba, Spain, 2011.
18. I. Perfilieva, P. Hod'áková, and P. Hurtík, "$F^1$-transform edge detector inspired by Canny's algorithm," in Advances on Computational Intelligence, Communications in Computer and Information Science, pp. 230–239, Springer, Heidelberg, Germany, 2012.
19. M. Štěpnička and R. Valášek, "Numerical solution of partial differential equations with help of fuzzy transform," in Proceedings of the 2005 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '05), pp. 1153–1162, Reno, Nev, USA, 2005.
20. I. Perfilieva, V. Novák, V. Pavliska, A. Dvořák, and M. Štěpnička, "Analysis and prediction of time series using fuzzy transform," in Proceedings of the International Joint Conference on Neural Networks (WCCI '08), pp. 3875–3879, Hong Kong, 2008.
Copyright
Copyright © 2012 Marek Vajgl et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.