Editorial | Open Access
Multimedia Data Fusion
1. Introduction
Multimedia data, such as images, video, text, and audio, is used widely throughout the world. To obtain better sensing performance, research on multimedia data fusion (MDF) is active and extensive worldwide. In medical systems, a body part may be imaged with different sensors, such as computed tomography and magnetic resonance imaging. In video surveillance, the interest is in the identification, recognition, and tracking of people by numerous cameras. All of these cases illustrate the importance of MDF in real-life applications. A number of mathematical methods have been studied in MDF, including statistics, fuzzy mathematics, stochastic differential theory, and computational methods.
In this special issue, we aim to reflect on the current and future theory and application of MDF in the following aspects: sensor fusion in multimedia medical data, sensor registration in 3D multimedia data, sensor selection for optimal sensor fusion, performance evaluation of data fusion algorithms, joint data registration and fusion algorithms, and applications of multimedia data fusion.
2. Sensor Fusion in Multimedia Medical Data
Multisensor fusion has applications ranging from video surveillance to daily life monitoring, and even battlefield monitoring and sensing [1, 2]. In particular, with the development of the Internet in recent years, research on body sensors for human health care has attracted increasing attention. Guaranteeing that these sensors keep working reliably is a critical problem; that is, deployments of diverse sensors require robust, real-time sensor fusion. At present, sensor fusion provides a promising way to address the stability and reliability issues of multimedia medical data and has already yielded results. Based on laser scanners and video cameras, Chae et al. proposed an adaptive sensor fusion method that can compensate for the horizontal bias between the two sensors; their results show that the proposed system can successfully obtain people's positions in a real-time system. In recent years, the application of sensor and data fusion in telemedicine has become increasingly widespread. To enhance the robustness of sensors against environmental disturbances or material limits, a data fusion method that takes temporary sensor failures into account has been used to integrate the various sensor outputs in medical in-home telemonitoring. This fusion, based on data from all available sensors, is another type of sensor fusion. Sensor fusion applied to the development of medical robots is an important direction for future research. Avor et al. proposed a fusion matrix for multisensor data that can effectively determine the complex interactions between the sensor, muscular, and mechanical components of the human locomotor system. This technique will reinforce the modeling and building of medical robots for paraplegic rehabilitation as well as intelligent mobile robots for industrial manufacturing.
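The robustness to temporary sensor failure described above can be illustrated with a minimal sketch. The following is not the method of any cited paper; it is a hypothetical inverse-variance weighted fusion in which a failed sensor reports `None` and is simply excluded, so the fused estimate degrades gracefully:

```python
def fuse_readings(readings, variances):
    """Inverse-variance weighted fusion of redundant sensor readings.

    Sensors that have failed temporarily report None and are excluded,
    so the fused estimate degrades gracefully instead of breaking.
    Returns the fused estimate and its (reduced) variance.
    """
    pairs = [(r, v) for r, v in zip(readings, variances) if r is not None]
    if not pairs:
        raise ValueError("all sensors failed")
    weights = [1.0 / v for _, v in pairs]
    total = sum(weights)
    estimate = sum(w * r for w, (r, _) in zip(weights, pairs)) / total
    fused_variance = 1.0 / total
    return estimate, fused_variance

# Three body-temperature sensors; the second has failed temporarily.
est, var = fuse_readings([36.6, None, 36.8], [0.04, 0.04, 0.09])
```

Note that the fused variance is smaller than any individual sensor's variance, which is the quantitative payoff of fusing all available sensors.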
3. Sensor Registration in 3D Multimedia Data
To analyze, compare, and integrate data from different sensors effectively, an important prerequisite is that the data be transformed into a common reference frame and that systematic errors and random noise be removed. 3D multimedia data is more complicated than traditional data, which makes sensor registration especially important when dealing with it. For studying cultural heritage objects, Chane et al. proposed a method that permits the registration of 3D models based on 3D multimedia data and multispectral acquisitions. In the orthopaedic field, the 3D scanner is an important tool, and the relevant 3D multimedia data must be registered exactly. Nagamune presented a method that can reproduce the dynamic motion of the knee profile with a 3D scanner and electromagnetic sensors; the proposed method could reproduce the dynamic motion of the bone profile of both knees. As sensors become increasingly powerful, we can acquire ever more quantified 3D multimedia data. Making good use of these data is a challenge we face, and sensor registration is a main concern in the field.
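The transformation into a common reference frame mentioned above is, in the rigid case, a classical least-squares problem. As an illustrative sketch (not the method of any paper cited here), the Kabsch algorithm recovers the rotation and translation that align corresponding 3D points:

```python
import numpy as np

def rigid_register(source, target):
    """Kabsch algorithm: least-squares rigid transform (R, t) mapping
    corresponding 3D points `source` onto `target`, i.e. into the
    common reference frame required before fusion."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Synthetic check: rotate/translate a point cloud, then recover the motion.
rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_register(pts, moved)
```

Real 3D scanner data also requires finding the point correspondences (e.g., iteratively, as in ICP) and rejecting noise, which is where the research effort lies.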
4. Sensor Selection for Optimal Sensor Fusion
Sensor fusion unites the outputs of several devices to recover a particular property of the environment. Ordinarily, the sensors employed are range finders, sonar, video cameras, and tactile sensors. The problem of optimal decision fusion has drawn the attention of several researchers owing to the interest in deploying multiple sensors for transmission and surveillance purposes. Because of restricted transmission capacity, the sensors are required to transmit their decisions instead of the raw data upon which the decisions are based. Thomopoulos et al. studied optimal data fusion with a centralized fusion center applying the Neyman-Pearson (N-P) test; the optimal decision rule for the likelihood ratio at the fusion center is derived, and an improvement in system performance beyond that of the most reliable sensor is shown to be feasible. Zhu et al. proposed the optimal compression matrix for sensor measurements: their method optimally compresses the sensor data so as to save communication bandwidth between the sensors and the fusion center while determining the minimum dimension of the transmitted sensor data. Aranda et al. proposed new decentralized control laws for the optimal positioning of sensor networks that track a target; the determinant of the Cramér-Rao lower bound and its computation in the 2D and 3D cases are investigated, and motion coordination algorithms that steer the mobile sensor network to an optimal deployment are proposed and characterized. Optimal selection for fusion is emerging as an important research issue. Moreover, the optimality of selection can be determined by various constraints, such as the extent to which the task is accomplished, the complexity with which it is fulfilled, and even the cost of applying the modalities for performing the task.
Because the optimal subset changes over time, how frequently it should be recalculated, so that the cost of recomputation is kept low while timeliness is maintained, is an open issue for investigators to consider.
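The decision-fusion setting discussed above, where sensors transmit binary decisions rather than raw data, admits a compact likelihood-ratio fusion rule. The sketch below is the textbook form of this rule (it is not reproduced from the cited paper): each local decision contributes its log-likelihood ratio, computed from the sensor's assumed detection and false-alarm probabilities, and the fusion center thresholds the sum:

```python
import math

def fuse_decisions(decisions, pd, pfa, threshold=0.0):
    """Likelihood-ratio fusion of local binary decisions.

    Each sensor i reports u_i in {0, 1}; pd[i] and pfa[i] are its
    detection and false-alarm probabilities. The fusion center sums the
    log-likelihood ratio of each reported decision and compares the sum
    against a threshold, in the spirit of Neyman-Pearson decision fusion.
    """
    llr = 0.0
    for u, d, f in zip(decisions, pd, pfa):
        if u == 1:
            llr += math.log(d / f)
        else:
            llr += math.log((1.0 - d) / (1.0 - f))
    return (1 if llr > threshold else 0), llr

# Three sensors: two reliable ones say "target", one noisy one says "no".
decision, llr = fuse_decisions([1, 1, 0],
                               pd=[0.9, 0.9, 0.6],
                               pfa=[0.05, 0.05, 0.4])
```

Reliable sensors automatically receive larger weights, which is why the fused detector can outperform even the single most reliable sensor.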
5. Performance Evaluation of the Data Fusion Algorithms
In recent years, with the in-depth application of data fusion theory in the field of target tracking, investigators have come to understand that multisensor data fusion is the key technology for optimizing the performance of a tracking system. Multisensor data fusion has thus become a theoretical frontier and has generated a large number of data fusion algorithms. Therefore, how to effectively evaluate the performance of these algorithms is a problem that must be solved. The performance of data fusion algorithms mainly comprises two aspects: fusion accuracy and real-time computation. Chang et al. gave an analysis of the track-to-track fusion problem and described an analytical methodology for comparing the performance of several fusion algorithms based on the information matrix; the average root-mean-square error was used to evaluate the fused covariance matrix. Sun et al. proposed a new multisensor optimal information fusion algorithm weighted by matrices in the linear minimum variance sense; the algorithm accounts for the correlation among local estimation errors and involves the inverse of a high-dimensional matrix. Jayaweera analyzed the Bayesian fusion performance of a stochastic Gaussian signal in a distributed sensor system under spectral and power constraints. The system optimization was based on the Bhattacharyya error exponent, which yields simple rules for the optimal number of nodes under a global average power constraint, and the Bayesian fusion performance was analyzed via the Chernoff and Bhattacharyya upper bounds on the fusion error probability. These performance evaluations still fall short in generality, a priori applicability, and independence, and they need to be studied further and improved.
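To make concrete what the evaluated track-fusion algorithms compute, here is a minimal information-matrix fusion of two local track estimates. It assumes independent local estimation errors (the correlated case handled by the matrix-weighted algorithm discussed above needs cross-covariance terms), so it is an illustrative baseline rather than any cited algorithm:

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Information-matrix fusion of two local track estimates (x, P).

    Assumes independent local estimation errors; the fused covariance
    is the inverse of the summed information matrices, and the fused
    state is the information-weighted combination of the local states.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)        # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)       # fused state estimate
    return x, P

# Two local trackers: each is accurate in a different state component.
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([2.0, 1.0]), np.diag([4.0, 1.0])
x, P = fuse_tracks(x1, P1, x2, P2)
```

The fused covariance is smaller than either local covariance, and evaluating how close a practical algorithm gets to this bound (e.g., via average root-mean-square error over Monte Carlo runs) is exactly the accuracy side of the performance question above.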
6. Joint Data Registration and Fusion Algorithms
Joint data fusion is a problem that arises in several areas of science. It has received significant attention in geological and satellite-based mapping and in medical imaging [15, 16]. A spatiotemporal image registration framework for registering dynamic positron emission tomography images has been investigated, which can be used to describe the changing distribution of the tracer over time. Based on iterative rigid registration and diffusion-based deformation of reference segmentations, an algorithm has been proposed for the joint reconstruction and segmentation of serial section data; the results show that interleaving registration and segmentation allows expert knowledge to be used for both reconstruction and labeling. Analyzing longitudinal anatomical changes is a crucial component of many clinical scenarios. To process tissue slides without presegmentation, Masumoto et al. developed a similarity measure for volume registration using a known joint distribution. A formulation for constructing spatiotemporal subject-specific models from longitudinal image data, based on a generative model, has also been presented; the proposed framework integrates fundamental concepts of segmentation, registration, and atlas construction. Joint data registration and its applications remain an important research topic.
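Joint-distribution similarity measures of the kind discussed above are commonly built from a joint histogram of the two images. As a sketch (a generic mutual-information measure, not the specific similarity measure of the cited work): aligned images produce a sharp joint histogram and thus high mutual information, while misalignment flattens it:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their
    joint histogram. Aligned image pairs concentrate the joint
    distribution and therefore score higher than misaligned pairs."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# A perfectly aligned pair scores higher than a shifted (misaligned) pair.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=1)                 # misaligned copy
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, shifted)
```

A registration algorithm then searches over transformations to maximize this score, which is what makes the measure usable across modalities where intensities differ but co-occur consistently.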
We, the Guest Editors, are indebted to all of the reviewers who dedicated their valuable effort to reviewing the submitted papers. We would also like to thank all of the authors for their contributions to this special issue.
- M. A. Davenport, C. Hegde, M. F. Duarte, and R. G. Baraniuk, “Joint manifolds for data fusion,” IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2580–2594, 2010.
- Y. Zhang, H. Zhang, N. M. Nasrabadi, and T. S. Huang, “Multi-metric learning for multi-sensor fusion based classification,” Information Fusion, vol. 14, no. 4, pp. 431–440, 2013.
- Y. N. Chae, Y. J. Choi, Y. H. Seo, and H. S. Yang, “Robust people tracking using an adaptive sensor fusion between a laser scanner and video camera,” International Journal of Distributed Sensor Networks, vol. 2013, Article ID 521383, 7 pages, 2013.
- H. Medjahed, D. Istrate, J. Boudy, J.-L. Baldinger, and B. Dorizzi, “A pervasive multi-sensor data fusion for smart home healthcare monitoring,” in Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ '11), pp. 1466–1473, June 2011.
- J. K. Avor and T. Sarkodie-Gyan, “An approach to sensor fusion in medical robots,” in Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR '09), pp. 818–822, June 2009.
- C. S. Chane, R. Schütze, F. Boochs, and F. S. Marzani, “Registration of 3D and multispectral data for the study of cultural heritage surfaces,” Sensors, vol. 13, no. 1, pp. 1004–1020, 2013.
- K. Nagamune, “Development of force measurement system by embedding force sensor to insert of knee prosthesis,” in Proceedings of the World Automation Congress (WAC '10), pp. 1–5, September 2010.
- S. C. A. Thomopoulos, R. Viswanathan, and D. C. Bougoulias, “Optimal decision fusion in multiple sensor systems,” IEEE Transactions on Aerospace and Electronic Systems, vol. 23, no. 5, pp. 644–653, 1987.
- Y. Zhu, E. Song, J. Zhou, and Z. You, “Optimal dimensionality reduction of sensor data in multisensor estimation fusion,” IEEE Transactions on Signal Processing, vol. 53, no. 5, pp. 1631–1639, 2005.
- S. Aranda, S. Martínez, and F. Bullo, “On optimal sensor placement and motion coordination for target tracking,” Automatica, vol. 42, no. 4, pp. 661–668, 2006.
- P. K. Atrey, M. A. Hossain, A. El Saddik, and M. S. Kankanhalli, “Multimodal fusion for multimedia analysis: a survey,” Multimedia Systems, vol. 16, no. 6, pp. 345–379, 2010.
- K. C. Chang, T. Zhi, and R. K. Saha, “Performance evaluation of track fusion with information matrix filter,” IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 2, pp. 455–466, 2002.
- S.-L. Sun, “Multi-sensor optimal information fusion Kalman filters with applications,” Aerospace Science and Technology, vol. 8, no. 1, pp. 57–62, 2004.
- S. K. Jayaweera, “Bayesian fusion performance and system optimization for distributed stochastic Gaussian signal detection under communication constraints,” IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1238–1250, 2007.
- J. Masumoto, Y. Sato, M. Hori et al., “A similarity measure for nonrigid volume registration using known joint distribution of targeted tissue: application to dynamic CT data of the liver,” Medical Image Analysis, vol. 7, no. 4, pp. 553–564, 2003.
- E. B. Gulsoy, J. P. Simmons, and M. de Graef, “Application of joint histogram and mutual information to registration and data fusion problems in serial sectioning microstructure studies,” Scripta Materialia, vol. 60, no. 6, pp. 381–384, 2009.
- F. Bollenbeck and U. Seiffert, “Joint registration and segmentation of histological volume data by diffusion-based label adaption,” in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 2440–2443, August 2010.
- M. Prastawa, S. P. Awate, and G. Gerig, “Building spatiotemporal anatomical models using joint 4-D segmentation, registration, and subject-specific atlas estimation,” in Proceedings of the IEEE Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '12), pp. 49–56, January 2012.
Copyright © 2013 Shangbo Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.