Research Article

Application of an Interpretable Machine Learning Model to Predict Lymph Node Metastasis in Patients with Laryngeal Carcinoma

Figure 6

Interpretation of the model. (a) Feature importance derived from the XGBoost model. The plot shows the relative importance of each feature in the XGBoost model. (b) SHAP summary plot of the 7 risk features of the XGBoost model. The higher the SHAP value of a feature, the higher the predicted probability of LNM. A dot is created for each feature attribution value of each patient's prediction, so each patient contributes one dot on the line for each feature. Dots are colored according to the feature value for the respective patient and accumulate vertically to depict density. Red represents higher feature values, and blue represents lower feature values. (c) SHAP dependence plot for the primary site. The SHAP dependence plot shows how a single feature affects the output of the XGBoost prediction model; a SHAP value above zero for a feature represents an increased risk of LNM. Abbreviations: SHAP, Shapley additive explanations; XGBoost, extreme gradient boosting; LNM, lymph node metastasis.