Review Article
Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey
Table 1
Comparison of the performance of different deep learning-based segmentation methods.
Notes. ACC = accuracy; Average = the average of sensitivity, specificity, and precision; BRATS = multimodal brain tumor segmentation dataset, including four MRI sequences (T1W, T1 postcontrast (T1c), T2W, and FLAIR); CNN = convolutional neural network; CRF = conditional random fields; DNN = deep neural network; DSC = Dice similarity coefficient; EM = expectation-maximization algorithm; GLISRT = glioma image segmentation and registration; HGG = high-grade gliomas; InputCascadeCNN = cascaded architecture using input concatenation; LGG = low-grade gliomas; PPV = positive predictive value; PREC = precision; RF = random forests; SDAE = stacked denoising autoencoder; SEN = sensitivity; SPE = specificity.
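For reference, the evaluation metrics listed in the notes (DSC, SEN, SPE, PREC, ACC, PPV) can all be derived from the confusion-matrix counts of a binary segmentation against its ground truth. The sketch below is illustrative only (function and variable names are our own, not from any method in the table), assuming both masks are binary NumPy arrays of the same shape:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute common binary-segmentation metrics from two masks.

    pred, truth: array-like of {0, 1}, same shape.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),      # Dice similarity coefficient
        "SEN": tp / (tp + fn),                   # sensitivity (recall)
        "SPE": tn / (tn + fp),                   # specificity
        "PREC": tp / (tp + fp),                  # precision (equals PPV)
        "ACC": (tp + tn) / (tp + tn + fp + fn),  # accuracy
    }

# Toy 4-voxel example: one TP, one FP, one FN, one TN.
metrics = segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0])
print(metrics)  # every metric is 0.5 for this balanced toy case
```

Note that PPV and PREC in the table denote the same quantity (tp / (tp + fp)); different papers simply use different labels.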