International Journal of Biomedical Imaging
Volume 2014, Article ID 128324, 11 pages
http://dx.doi.org/10.1155/2014/128324
Research Article

A Framework for the Objective Assessment of Registration Accuracy

1Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
2Department of Computer Science, University of Verona, 37134 Verona, Italy
3Department of Neurological, Neuropsychological, Morphological and Movement Sciences, University of Verona, 37126 Verona, Italy

Received 30 September 2013; Revised 26 December 2013; Accepted 27 December 2013; Published 10 February 2014

Academic Editor: Jun Zhao

Copyright © 2014 Francesca Pizzorni Ferrarese et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea is to predict the accuracy of a given pipeline by identifying and characterizing its sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow the performance to be benchmarked against the ground truth, real data enable the assessment of the robustness of the methodology in real contexts and of the suitability of synthetic data for the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, whereas the opposite trend was observed for pathological data. The proposed model not only provides good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, laying the groundwork for its generalization to different and more complex scenarios.
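To make the case study concrete, the sketch below illustrates an intrasubject rigid registration of two MR volumes, scored with a landmark-based target registration error (TRE) as an objective accuracy metric. It is a minimal illustration, not the authors' pipeline: it assumes the SimpleITK library, and the file names, landmark coordinates, and optimizer parameters are placeholders chosen for the example.

```python
import SimpleITK as sitk

# Load the intrasubject MR volumes (file names are placeholders).
fixed = sitk.ReadImage("baseline_t1.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("followup_t1.nii.gz", sitk.sitkFloat32)

# Rigid (6-DOF) registration, initialized by aligning the volume centers.
# For the affine variant of the case study, pass sitk.AffineTransform(3)
# instead of sitk.Euler3DTransform().
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
final = reg.Execute(fixed, moving)

# Objective accuracy: TRE over corresponding landmarks in physical
# coordinates (mm). The pairs below are illustrative; with synthetic data
# they would be derived from the known ground-truth transform.
fixed_pts = [(10.0, 20.0, 30.0), (40.0, 50.0, 62.0)]
moving_pts = [(11.2, 19.5, 30.8), (41.0, 49.7, 61.1)]
tre = [
    sum((a - b) ** 2 for a, b in zip(final.TransformPoint(pf), pm)) ** 0.5
    for pf, pm in zip(fixed_pts, moving_pts)
]
print("mean TRE [mm]: %.3f" % (sum(tre) / len(tre)))
```

On synthetic data the ground-truth transform makes the TRE exact, matching the benchmarking role described above; on real data, where no ground truth exists, surrogate objective metrics of this kind are what the prediction framework must be trained on and compared against.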