ISRN Machine Vision
Volume 2012, Article ID 834127, 10 pages
Research Article

On the Brittleness of Handwritten Digit Recognition Models

Alexander K. Seewald

Seewald Solutions, Leitermayergasse 33, 1180 Vienna, Austria

Received 19 July 2011; Accepted 7 September 2011

Academic Editor: A. Torsello

Copyright © 2012 Alexander K. Seewald. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Handwritten digit recognition is an important benchmark task in computer vision. Learning algorithms and feature representations which offer excellent performance for this task have been known for some time. Here, we focus on two major practical considerations: the relationship between the amount of training data and error rate (corresponding to the effort of collecting training data to build a model with a given maximum error rate) and the transferability of models' expertise between different datasets (corresponding to their usefulness for general handwritten digit recognition). While the relationship between the amount of training data and error rate is very stable and to some extent independent of the specific dataset used (only the classifier and feature representation have a significant effect), it has proven impossible to transfer low error rates on one or two pooled datasets to similarly low error rates on another dataset. We call this weakness brittleness, borrowing an old Artificial Intelligence term with the same meaning. It may be a general weakness of trained image classification systems.