Review Article

The Value of Artificial Intelligence-Assisted Imaging in Identifying Diagnostic Markers of Sarcopenia in Patients with Cancer

Table 1

Summary of segmentation methods.

| No. | Author (year) | Population | Mean age (years) | Localization | Neural network | Segmentation algorithm | Segmentation ground truth |
|---|---|---|---|---|---|---|---|
| 1 | Ackermans (2021) [19] | Cancer surgery cases: colorectal, ovarian, and pancreatic cancers (training); polytrauma patients (testing) | Testing: 74 | L3 muscle (L3M), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT) | DLNN | 2D U-Net | Manual segmentation using software (TomoVision "sliceOmatic") |
| 2 | Borrelli (2021) [51] | Lymphoma (training); prostate cancer (testing) | Training: 61; testing: 67 | L3 | CNN | RECOMIA platform U-Net | Manual segmentation using a cloud-based annotation tool (RECOMIA, http://www.recomia.org) |
| 3 | Castiglione (2021) [52] | Pediatric patients | 0–18 | Skeletal muscle area at the L3 level; 12- or 18-section MIP images | CNN | U-Net | Manual segmentation |
| 4 | Amarasinghe (2021) [49] | Non-small-cell lung cancer | 67 | Skeletal muscle at the L3 vertebra | CNN + DL | 2.5D U-Nets | Manual segmentation based on the Alberta protocol |
| 5 | Kim (2021) [58] | Gastric cancer patients undergoing gastrectomy | 60.4 | L3 | CNN | ResNet-18 | Manual segmentation with software (Aquarius 3D workstation, TeraRecon) |
| 6 | Magudia (2021) [61] | Pancreatic adenocarcinoma | 52 | L3 | CNN | DenseNet model to predict the spatial offset; U-Net model for segmentation | Manual segmentation with software; internal data set: sliceOmatic (TomoVision, Magog, Canada); external data set: OsiriX (Pixmeo, Bernex, Switzerland) |
| 7 | Koitka (2021) [59] | Individuals with abdominal CT scans (unknown patients) | Training: 62.6; test: 65.6 | Whole abdomen, not only L3 slices | CNN | Multiresolution 3D U-Net | Annotation with ITK-SNAP software (version 3.8.0); region segmentation performed manually with a polygon tool |
| 8 | Hsu (2021) [57] | Pancreatic cancer | 67 | L3 | CNN | ResNet-18 model for slice selection; 2D U-Net for segmentation | Manually annotated, expert labeled |
| 9 | Zopfs (2020) [16] | The Cancer Imaging Archive's "CT Lymph Nodes" collection and the institutional picture archiving and communication system | 62 | Abdomen plus images above (cranial) and below (caudal) this region | DCNN | U-Net | Manual segmentation |
| 10 | Edwards (2020) [54] | Adult patients | 18–75 | L3 | CNN | Supervised U-Net | Manual segmentation |
| 11 | Hemke (2020) [56] | 200 subjects | 49.9 | Pelvic content | DCNN | U-Net | Manual and semiautomated thresholding using the OsiriX DICOM viewer (version 6.5.2, http://www.osirix-viewer.com/index.html) |
| 12 | Burns (2020) [47] | 102 sequential patients | 68 | L1–L5 | CNN | U-Net | Annotation using ITK-SNAP software; region segmentation performed manually |
| 13 | Paris (2020) [48] | Critically ill, liver cirrhosis, pancreatic cancer, and clear cell renal cell carcinoma patients; renal and liver donors | Training/validation: 52.6; test: 53.9 | L3 | DCNN | Adapted U-Net | Manual segmentation using sliceOmatic (TomoVision, Montreal, Canada; versions 4.2, 4.3, and 5.0) |
| 14 | Blanc-Durand (2020) [46] | Unknown subjects | N/A | L3 | DCNN | 2D U-Net | Manual annotation using the public freeware 3D Slicer |
| 15 | Park (2020) [62] | Gastric cancer, pancreatic cancer, and sepsis patients, and healthy individuals | Training: 56.1; internal validation: 56.6; external validation: 61.1 | L3 | CNN | FCN-based | Semiautomated segmentation software (AsanJ-Morphometry) followed by manual correction |
| 16 | Barnard (2019) [50] | Older adults who were current or former smokers | 71.6 | T12 | CNN | U-Net | Manual segmentation using Mimics software (Materialise, Leuven, Belgium) |
| 17 | Graffy (2019) [55] | Asymptomatic adults | 57.1 | L3 | CNN | U-Net | Manual segmentation |
| 18 | Dabiri (2019) [53] | Data from the Cross Cancer Institute (CCI), University of Alberta, Canada | N/A | L3 and T4 | CNN | FCN with VGG16 | Manual segmentation using Slice-O-Matic V4.3 software (TomoVision, Montreal, Canada) |
| 19 | Lee (2017) [60] | Patients with lung cancer | 63 | L3 | CNN | FCN (ImageNet-pretrained model) | Semiautomated threshold-based segmentation, followed by manual correction |
| 20 | Shephard (2015) [63] | N/A | N/A | N/A | N/A | N/A | N/A |

L3M: L3 muscle; IMAT: intramuscular adipose tissue; VAT: visceral adipose tissue; SAT: subcutaneous adipose tissue; DLNN: deep learning neural network; CNN: convolutional neural network; MIP: maximum intensity projection; DL: deep learning; DCNN: deep convolutional neural network; N/A: not available; FCN: fully convolutional network.
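As the last column of the table shows, nearly every study benchmarked its automated output against manually segmented ground truth, and the Dice similarity coefficient is the metric most commonly reported for this comparison. A minimal NumPy sketch of that comparison (the function name is illustrative, not taken from any of the cited studies):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND truth| / (|pred| + |truth|); ranges from 0 (no
    overlap) to 1 (perfect agreement with the manual ground truth).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * intersection / total
```

A predicted mask identical to the manual mask scores 1.0; a mask covering two pixels of which one matches a one-pixel ground truth scores 2/3.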
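Once an L3 muscle mask is available (automated or manual), the sarcopenia markers themselves are straightforward to derive: cross-sectional skeletal muscle area from the mask and pixel spacing, and the skeletal muscle index (SMI), defined as muscle area divided by height squared. A minimal NumPy sketch, assuming the widely used −29 to +150 HU skeletal-muscle attenuation window (as in the Alberta protocol referenced in the table); function names are illustrative:

```python
import numpy as np

# Assumed skeletal-muscle attenuation window in Hounsfield units (HU),
# as commonly used in Alberta-protocol-style analyses.
MUSCLE_HU_RANGE = (-29, 150)

def muscle_area_cm2(hu_slice: np.ndarray, mask: np.ndarray,
                    pixel_spacing_mm: tuple) -> float:
    """Cross-sectional muscle area (cm^2) inside a segmented L3 mask,
    restricted to pixels within the skeletal-muscle HU window."""
    lo, hi = MUSCLE_HU_RANGE
    muscle = mask.astype(bool) & (hu_slice >= lo) & (hu_slice <= hi)
    # mm^2 per pixel -> cm^2 per pixel (1 cm^2 = 100 mm^2).
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0
    return float(muscle.sum()) * pixel_area_cm2

def skeletal_muscle_index(area_cm2: float, height_m: float) -> float:
    """SMI (cm^2/m^2) = L3 muscle area / height squared."""
    return area_cm2 / height_m ** 2
```

The HU window matters: pixels inside the anatomical mask but outside the window (e.g., intramuscular fat at strongly negative HU) are excluded from the muscle area, which is how IMAT can be separated from lean muscle in studies such as Ackermans (2021) [19].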