Research Article

A Comparative Analysis of Visual Encoding Models Based on Classification and Segmentation Task-Driven CNNs

Figure 2

Comparison of prediction accuracy between the FCN32-based and VGG16-based encoding models in five visual areas of subject 1. (a) Prediction accuracies. The abscissa and ordinate represent the prediction accuracy of the FCN32-based and VGG16-based encoding models, respectively. Orange dots mark voxels predicted better by the FCN32-based model; blue dots mark voxels predicted better by the VGG16-based model; black dots mark voxels with prediction accuracy below 0.13. The green dashed lines mark the 0.13 accuracy threshold. (b) Distribution of the differences in prediction accuracy. Blue denotes voxels for which the VGG16-based model achieves higher accuracy; orange denotes voxels for which the FCN32-based model achieves higher accuracy. The numbers on each side give the fraction of voxels better predicted by the corresponding model.
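The voxel grouping described in the caption can be sketched as follows. This is a minimal illustration, not the authors' code: the accuracy arrays are simulated, and it assumes (as one plausible reading of the caption) that a black dot means a voxel falling below the 0.13 threshold under both models.

```python
import numpy as np

# Hypothetical per-voxel prediction accuracies for each model (simulated data).
rng = np.random.default_rng(0)
acc_fcn32 = rng.uniform(0.0, 0.6, size=1000)
acc_vgg16 = rng.uniform(0.0, 0.6, size=1000)

THRESH = 0.13  # accuracy threshold marked by the green dashed lines

# Black dots: assumed here to be voxels below the threshold under both models.
below = (acc_fcn32 < THRESH) & (acc_vgg16 < THRESH)

# Remaining voxels are coloured by which model predicts them better.
above = ~below
fcn32_better = above & (acc_fcn32 > acc_vgg16)   # orange dots
vgg16_better = above & (acc_vgg16 >= acc_fcn32)  # blue dots

# Fractions reported on each side of the diagonal in panel (a).
frac_fcn32 = fcn32_better.sum() / above.sum()
frac_vgg16 = vgg16_better.sum() / above.sum()
print(f"FCN32 better: {frac_fcn32:.2f}, VGG16 better: {frac_vgg16:.2f}")
```

The two fractions are computed over supra-threshold voxels only, so they sum to one, matching the paired numbers shown in panel (a).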