Multiscale Deep Network with Centerness-Aware Loss for Salient Object Detection
Table 2
Performance comparison with state-of-the-art methods on five popular saliency datasets. Max F-measure (Fm, larger is better), MAE (smaller is better), and S-measure (Sm, larger is better) are used to measure model performance.
| Method | DUT-OMRON | | | DUTS | | | ECSSD | | | PASCAL-S | | | HKU-IS | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Fm | MAE | Sm | Fm | MAE | Sm | Fm | MAE | Sm | Fm | MAE | Sm | Fm | MAE | Sm |
| **VGG backbone** | | | | | | | | | | | | | | | |
| Amulet17 | 0.647 | 0.098 | 0.781 | 0.678 | 0.085 | 0.804 | 0.868 | 0.059 | 0.894 | 0.769 | 0.099 | 0.819 | 0.841 | 0.051 | 0.886 |
| NLDF17 | 0.684 | 0.080 | 0.770 | — | — | — | 0.878 | 0.063 | 0.875 | 0.780 | 0.101 | 0.801 | 0.874 | 0.048 | 0.879 |
| BMPM18 | 0.692 | 0.064 | 0.809 | 0.745 | 0.049 | 0.862 | 0.868 | 0.045 | 0.911 | 0.771 | 0.075 | 0.845 | 0.871 | 0.039 | 0.907 |
| C2SNet18 | 0.683 | 0.072 | 0.798 | 0.716 | 0.063 | 0.828 | 0.864 | 0.055 | 0.893 | 0.769 | 0.083 | 0.835 | 0.851 | 0.048 | 0.883 |
| RAS18 | 0.713 | 0.062 | 0.814 | 0.751 | 0.059 | 0.839 | 0.889 | 0.056 | 0.893 | 0.785 | 0.106 | 0.793 | 0.871 | 0.045 | 0.887 |
| PiCANet18 | 0.710 | 0.068 | 0.826 | 0.749 | 0.054 | 0.861 | 0.885 | 0.046 | 0.914 | 0.801 | 0.079 | 0.849 | 0.870 | 0.042 | 0.906 |
| CPD19 | 0.745 | 0.057 | 0.818 | 0.813 | 0.043 | 0.867 | 0.915 | 0.040 | 0.910 | 0.830 | 0.075 | 0.841 | 0.896 | 0.033 | 0.904 |
| MINet20 | 0.741 | 0.057 | 0.821 | 0.823 | 0.039 | 0.875 | 0.922 | 0.036 | 0.919 | 0.840 | 0.066 | 0.852 | 0.904 | 0.031 | 0.912 |
| **ResNet backbone** | | | | | | | | | | | | | | | |
| DGRL18 | 0.733 | 0.062 | 0.806 | 0.794 | 0.050 | 0.842 | 0.906 | 0.041 | 0.903 | 0.827 | 0.073 | 0.837 | 0.890 | 0.036 | 0.894 |
| PiCANet18 | 0.717 | 0.065 | 0.832 | 0.759 | 0.051 | 0.869 | 0.886 | 0.046 | 0.917 | 0.802 | 0.078 | 0.852 | 0.870 | 0.043 | 0.904 |
| BASNet19 | 0.756 | 0.057 | 0.836 | 0.791 | 0.048 | 0.866 | 0.880 | 0.037 | 0.916 | 0.777 | 0.079 | 0.834 | 0.896 | 0.032 | 0.909 |
| EGNet19 | 0.756 | 0.053 | 0.841 | 0.815 | 0.039 | 0.887 | 0.920 | 0.037 | 0.925 | 0.829 | 0.076 | 0.850 | 0.901 | 0.031 | 0.918 |
| PoolNet19 | 0.747 | 0.056 | 0.836 | 0.809 | 0.040 | 0.883 | 0.915 | 0.039 | 0.921 | 0.828 | 0.076 | 0.849 | 0.899 | 0.032 | 0.917 |
| CPD19 | 0.747 | 0.056 | 0.825 | 0.805 | 0.043 | 0.869 | 0.917 | 0.037 | 0.918 | 0.829 | 0.074 | 0.844 | 0.891 | 0.034 | 0.906 |
| SCRN19 | 0.746 | 0.056 | 0.837 | 0.809 | 0.040 | 0.885 | 0.918 | 0.038 | 0.927 | 0.837 | 0.066 | 0.865 | 0.897 | 0.034 | 0.916 |
| MINet20 | 0.755 | 0.055 | 0.833 | 0.828 | 0.037 | 0.884 | 0.925 | 0.034 | 0.925 | 0.840 | 0.066 | 0.854 | 0.909 | 0.029 | 0.919 |
| GateNet20 | 0.746 | 0.055 | 0.838 | 0.807 | 0.040 | 0.885 | 0.916 | 0.040 | 0.920 | 0.830 | 0.071 | 0.854 | 0.899 | 0.033 | 0.915 |
| Ours | 0.774 | 0.054 | 0.846 | 0.837 | 0.038 | 0.889 | 0.926 | 0.033 | 0.927 | 0.847 | 0.064 | 0.863 | 0.913 | 0.029 | 0.922 |
| **ResNeXt backbone** | | | | | | | | | | | | | | | |
| R3Net18 | 0.747 | 0.062 | 0.815 | — | — | — | 0.914 | 0.040 | 0.910 | 0.803 | 0.095 | 0.803 | 0.894 | 0.036 | 0.895 |
| GateNet20 | 0.762 | 0.051 | 0.849 | 0.816 | 0.035 | 0.897 | 0.917 | 0.035 | 0.929 | 0.827 | 0.065 | 0.865 | 0.903 | 0.030 | 0.925 |
| Ours | 0.791 | 0.049 | 0.860 | 0.857 | 0.034 | 0.900 | 0.934 | 0.032 | 0.933 | 0.858 | 0.061 | 0.870 | 0.923 | 0.025 | 0.929 |
Bold, italics, and underline indicate the best, second best, and third best performance. “—” means that the authors did not provide corresponding saliency maps.
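As a reference for how the table's metrics are defined (a minimal NumPy sketch, not the paper's evaluation code), MAE is the mean absolute difference between the predicted saliency map and the binary ground truth, while max F-measure binarizes the map at a sweep of thresholds and keeps the best F-beta score, with the conventional beta-squared of 0.3 used in the SOD literature. The toy maps below are illustrative, not drawn from the paper.

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error; both inputs in [0, 1], smaller is better."""
    return float(np.mean(np.abs(pred - gt)))

def max_f_measure(pred, gt, beta2=0.3, n_thresholds=255):
    """Max F-measure: binarize at evenly spaced thresholds, compute
    F_beta at each, and keep the maximum (larger is better)."""
    gt = gt.astype(bool)
    best = 0.0
    for t in np.linspace(0.0, 1.0, n_thresholds, endpoint=False):
        binary = pred > t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / (gt.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, float(f))
    return best

# Toy 4x4 saliency map and ground truth (illustrative values only).
pred = np.array([[0.9, 0.8, 0.1, 0.0],
                 [0.7, 0.9, 0.2, 0.1],
                 [0.1, 0.2, 0.0, 0.0],
                 [0.0, 0.1, 0.0, 0.0]])
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]], dtype=float)
print(round(mae(pred, gt), 3))            # 0.094
print(round(max_f_measure(pred, gt), 3))  # 1.0 (a threshold separates the object cleanly)
```

The S-measure (Sm) column additionally evaluates region- and object-aware structural similarity between the map and the ground truth; it is not reproduced here.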