Research Article

An Agile and Efficient Neural Network Based on Knowledge Distillation for Scene Text Detection

Table 2

The experimental results with different losses. Pr: the model is obtained by pruning (the pruning ratio is 0.1); Ft: the pruned model is fine-tuned; Fl: the feature loss is adopted; Pl: the probability loss is adopted. "P," "R," and "F" represent "Precision," "Recall," and "F-score," respectively.

Method               Pruning ratio   P (%)   R (%)   F (%)
DBNet                —               88.3    82.5    82.8
Pr                   0.1             83.4    80.0    81.1
Pr + Ft              0.1             85.5    80.1    82.7
Pr + Ft + Fl         0.1             86.4    79.8    83.0
Pr + Ft + Fl + Pl    0.1             86.5    80.4    83.3
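The feature loss (Fl) and probability loss (Pl) in Table 2 are the two distillation terms added when fine-tuning the pruned student against the original DBNet teacher. The snippet below is a minimal sketch of how such a combined objective could be written; the use of MSE for the feature term, binary cross-entropy for the probability-map term, and the weights alpha and beta are illustrative assumptions, not the exact formulation used in the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat,
                      student_prob, teacher_prob,
                      alpha=1.0, beta=1.0):
    """Combine a feature loss (Fl) and a probability loss (Pl).

    Assumptions (not specified by Table 2): the feature loss is an MSE
    between intermediate feature maps, the probability loss is a binary
    cross-entropy between the student's and teacher's probability maps,
    and alpha/beta are illustrative weighting factors.
    """
    # Fl: pull the student's feature map toward the teacher's
    # (teacher outputs are detached so no gradient flows into it).
    feature_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # Pl: match the student's probability map to the teacher's soft targets.
    prob_loss = F.binary_cross_entropy(student_prob, teacher_prob.detach())

    return alpha * feature_loss + beta * prob_loss

# Toy usage with random tensors standing in for DBNet feature maps and
# probability maps (shapes are placeholders, not the network's real sizes).
s_feat, t_feat = torch.randn(2, 256, 160, 160), torch.randn(2, 256, 160, 160)
s_prob, t_prob = torch.rand(2, 1, 640, 640), torch.rand(2, 1, 640, 640)
loss = distillation_loss(s_feat, t_feat, s_prob, t_prob)

In this reading, the rows of Table 2 correspond to switching these terms on incrementally: fine-tuning the pruned model with the feature term alone (Pr + Ft + Fl) and then with both terms (Pr + Ft + Fl + Pl) recovers most of the accuracy lost to pruning.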