Gastroenterology Research and Practice

Research Article | Open Access

Cheng Qu, Lin Gao, Xian-qiang Yu, Mei Wei, Guo-quan Fang, Jianing He, Long-xiang Cao, Lu Ke, Zhi-hui Tong, Wei-qin Li, "Machine Learning Models of Acute Kidney Injury Prediction in Acute Pancreatitis Patients", Gastroenterology Research and Practice, vol. 2020, Article ID 3431290, 8 pages, 2020.

Machine Learning Models of Acute Kidney Injury Prediction in Acute Pancreatitis Patients

Academic Editor: Piero Chirletti
Received: 27 Mar 2020
Revised: 19 Aug 2020
Accepted: 06 Sep 2020
Published: 29 Sep 2020


Background. Acute kidney injury (AKI) has long been recognized as a common and important complication of acute pancreatitis (AP). In this study, machine learning (ML) techniques were used to establish predictive models for AKI in AP patients during hospitalization. Methods. This is a retrospective review of prospectively collected data of AP patients admitted to our department within one week after the onset of abdominal pain, from January 2014 to January 2019. Eighty patients developed AKI after admission (AKI group) and 254 patients did not (non-AKI group). Using routinely available information such as demographic characteristics and laboratory data, support vector machine (SVM), random forest (RF), classification and regression tree (CART), and extreme gradient boosting (XGBoost) models were built to predict AKI, and their predictive performance was compared with that of the classic logistic regression (LR) model. Results. Among the machine learning models, XGBoost performed best in predicting AKI, with an AUC of 91.93%; the AUC of the logistic regression analysis was 87.28%. Conclusion. Compared with the classical logistic regression model, machine learning models using features that can be easily obtained at admission performed better in predicting AKI in AP patients.

1. Introduction

Acute pancreatitis (AP) is an acute inflammatory condition of the exocrine pancreas; most AP patients have a mild disease course and recover within one week [1]. However, about 20% of patients develop severe complications such as persistent organ failure and systemic inflammatory response syndrome (SIRS). Acute kidney injury (AKI) has long been recognized as a common and important complication of AP, with an incidence as high as 10%-42% [2, 3]. Furthermore, AP patients with concomitant AKI suffer a poor prognosis, with a mortality of 25%-75% [4–7]. Hence, early identification and timely management of AKI in AP patients are very important. However, it is difficult to identify renal injury early using traditional indicators, mainly because by the time serum creatinine (SCr) rises or urine output falls, kidney damage has already occurred [8].

Previous studies have identified a series of risk factors for predicting AKI, including triglyceride levels, age, male sex, procalcitonin, hypoxemia, abdominal compartment syndrome, and some biomarkers [9], and developed several AKI prediction models using classical regression methods [10–12]. However, their predictive performance was rarely reported in terms of the area under the receiver operating characteristic curve (AUROC), the primary measure of a prediction model [13]. Furthermore, the classical logistic regression model is sensitive to multicollinearity among independent variables, which makes it prone to underfitting and inaccuracy. Recently, artificial intelligence applications based on machine learning have been gradually introduced into the medical field [14–16], showing excellent performance in predicting complications compared with logistic regression analysis. Machine learning methods are broadly divided into unsupervised learning, which works on its own to discover structure in unlabeled data, and supervised learning, which learns from labeled training data and predicts outcomes for unseen data; the tree-based methods used here, such as random forest [17], classification trees [18], and extreme gradient boosting [19], are supervised learners. However, there are few studies using machine learning approaches to predict acute kidney injury in AP patients.

Therefore, in this study, we aimed to develop AKI predictive models for AP patients using different machine learning algorithms, namely, classification and regression tree (CART), random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGBoost), and to compare their predictive performance with that of the classical multivariable logistic regression (LR) method.

2. Methods

2.1. Patients

We performed a retrospective observational study of AP patients admitted to the Center of Severe Acute Pancreatitis (CSAP) of Jinling Hospital, Nanjing, China, from January 2014 to January 2019. The center is a tertiary center for acute pancreatitis located in eastern China. Patients who met the following criteria were included: (1) diagnosis of AP and (2) admission to our department within one week after disease onset. To minimize bias, we excluded patients who were older than 75 years or younger than 18 years, who had already developed AKI before admission, or who were suspected of chronic pancreatitis, pancreatic tumors, or pancreatic trauma, as well as pregnant patients. All the data were retrieved from a prospectively collected electronic database with the approval of the Acute Pancreatitis Database Management Committee. Informed consent from individuals was waived due to the retrospective, observational, and anonymous nature of the current study.

2.2. Definition

AP (ICD-10, K85) was diagnosed according to the definition in the 2012 revision of the Atlanta classification [20]. Acute kidney injury (AKI) (ICD-10, N17) was diagnosed and staged using the Kidney Disease: Improving Global Outcomes (KDIGO) criteria based on serum/plasma creatinine and urine output [8]. Patients who met the AKI criteria at any time during their hospitalization for AP were assigned to the AKI group. Alcohol abuse (ICD-10, F10) and smoking (Z72.0, Z86.43, and Z87.891) were identified using the relevant diagnostic codes.

2.3. Data Collection

We collected data on demographic characteristics, previous medical history, physical examination, laboratory examination, and therapeutic treatments of each patient. Based on previous studies, we selected 23 possible risk factors for predicting AKI, including etiology, demographic data (age, gender, smoking, and alcohol abuse), body mass index (BMI), hypertension, intra-abdominal pressure (IAP), disease severity score (APACHE II), acute respiratory distress syndrome (ARDS), and laboratory examinations (amylase, lipase, triglyceride (TG), cholesterol, white blood cell count (WBC), c-reactive protein (CRP), interleukin-6 (IL-6), procalcitonin (PCT), total bilirubin (TBIL), alanine aminotransferase (ALT), hemoglobin (Hb), platelet (PLT), and prothrombin time (PT)). All of the data were available from the hospitalized patient electronic medical record system within 24 h after admission. However, serum IL-6 values were incomplete (available for 240 of the 334 patients), so we filled the missing values with the mean of the available data.
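The mean-imputation strategy described above can be sketched as follows. This is an illustrative example only: the IL-6 numbers are made up, with `np.nan` marking a missing measurement.

```python
import numpy as np

# Hypothetical IL-6 values (pg/mL); np.nan marks a missing measurement.
il6 = np.array([156.9, np.nan, 44.3, 161.1, np.nan, 105.5])

# Mean of the available (non-missing) data.
mean_observed = np.nanmean(il6)

# Replace each missing entry with that mean.
il6_imputed = np.where(np.isnan(il6), mean_observed, il6)
```

After imputation, every entry is defined and the column mean is unchanged, which is the main property of this simple strategy.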

2.4. Statistical Analysis

The population characteristics are presented as medians and interquartile ranges (IQR) for continuous variables and as counts and percentages for dichotomous variables. For continuous variables, we used the Kolmogorov–Smirnov test to assess the normality of the data distribution and the Mann–Whitney U test to compare nonnormally distributed data between groups. A P value < 0.05 was taken as statistically significant.

Prior to developing the predictive models, the collected data were split into a training dataset (70%) and a test dataset (30%). The training dataset was used to develop the predictive models with the machine learning and logistic regression algorithms. The model parameters were tuned using tenfold cross-validation to reduce the chance of overfitting, and the final performance of each model was then validated and compared on the test dataset. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were adopted as comparative measures between the models.
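The split-then-cross-validate workflow above can be sketched with scikit-learn. The real patient data are not public, so a synthetic dataset of the same shape (334 samples, 23 features) stands in, and logistic regression serves as the example model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the 334-patient, 23-feature dataset.
X, y = make_classification(n_samples=334, n_features=23, random_state=0)

# 70% training / 30% test split, stratified on the outcome.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# Tenfold cross-validation on the training set guides parameter tuning.
model = LogisticRegression(max_iter=1000)
cv_auc = cross_val_score(model, X_train, y_train, cv=10, scoring="roc_auc")

# The final model is evaluated once on the held-out test set.
model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```

Cross-validation scores are computed only on the training portion; the test set is touched a single time, which is what makes the reported test AUC an honest estimate.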

The modeling and statistical analyses were performed using the scikit-learn package (version 0.19) and Python (version 3.6; Python Software Foundation).

2.5. Logistic Regression Algorithms and Machine Learning Algorithms
2.5.1. Logistic Regression (LR)

The logistic regression model is a discrete-choice, generalized linear regression analysis model [21]. It has been widely used in medicine, industry, and other areas. It uses the sigmoid function to map the predicted value to a probability in [0, 1], which is then used to judge the outcome (Figure 1(a)). This model can be applied to both continuous and categorical independent variables.
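The sigmoid mapping at the heart of logistic regression is a one-liner; a minimal sketch:

```python
import math

def sigmoid(z):
    """Map a real-valued linear score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```

A score of 0 maps to a probability of exactly 0.5, the usual classification threshold; large positive scores approach 1 and large negative scores approach 0.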

2.5.2. Classification and Regression Tree (CART)

The classification and regression tree [18] is a tree-like prediction model (Figure 1(b)). Each nonleaf node in the tree represents a feature input to the model, and the branches under the node represent the possible values of that feature. Each leaf node represents one or more samples, and the path taken from the root node to a leaf node represents the classification process of a sample. The decision tree itself has no specific requirements for the input features and can be used for both numerical data (continuous and discrete) and logical or categorical data. The CART algorithm uses the Gini index to select the optimal splitting feature. The Gini index measures node impurity; its value lies between 0 and 1, with 0 indicating a pure node.
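The Gini index used by CART can be computed directly from the class labels in a node; a minimal sketch:

```python
def gini(labels):
    """Gini impurity of a node: 1 minus the sum of squared class
    proportions. 0 means the node is pure; higher values mean more
    class mixing."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))
```

A node containing only AKI patients scores 0 (pure), while a 50/50 AKI/non-AKI node scores 0.5, the maximum for a binary outcome; CART prefers splits that drive child nodes toward 0.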

2.5.3. Random Forest (RF)

The random forest is an ensemble classifier composed of multiple decision trees [17] and belongs to the bagging family of algorithms (Figure 1(c)). There is no dependency between the weak learners, so they can be generated and fitted in parallel. The outputs of the weak learners are combined (by mean, mode, etc.) into the model output. The random forest is an evolved version of the bagging algorithm that uses CART decision trees as its weak learners.
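A short scikit-learn sketch of the idea, on toy data: the forest fits many independent CART weak learners and combines their votes into the ensemble prediction.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for the clinical features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 100 CART weak learners, each fitted on a bootstrap sample; their
# votes are aggregated into the final prediction.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Because the trees are independent, `n_jobs=-1` can be passed to fit them in parallel, which is the practical payoff of the bagging structure described above.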

2.5.4. Support Vector Machine (SVM)

The support vector machine [22] is a supervised learning model applied to classification and regression problems. For linearly separable problems, the model constructs hyperplanes in a high-dimensional or infinite-dimensional space to separate the samples; for linearly inseparable problems, the model chooses a suitable kernel function to map the samples into a space of much higher dimension than the original one, so that the samples become linearly separable there (Figure 1(d)).
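The kernel trick can be illustrated on toy data: two concentric rings are not linearly separable in the original 2-D space, but an RBF kernel implicitly maps them to a space where a separating hyperplane exists.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: linearly inseparable in the original space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# A linear SVM struggles; an RBF-kernel SVM separates them cleanly.
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)
```

The gap between `rbf_acc` and `linear_acc` on this toy problem is exactly the effect the kernel mapping is meant to deliver.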

2.5.5. Extreme Gradient Boosting (XGBoost)

Extreme gradient boosting (XGBoost) is an efficient system implementation of the gradient boosting decision tree (GBDT) algorithm and belongs to the boosting family [19, 23]. (1) Weak learner 1 is trained on the training set with initial weights; (2) the weights of the training samples are updated according to the learning error rate; (3) misclassified samples receive increased weights; (4) weak learner 2 is trained with the new weights, and this is iterated until the number of weak learners reaches a specified count; and (5) finally, all the weak learners are combined to obtain the final strong learner (Figure 1(e)).
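The sequential nature of boosting can be seen by scoring the ensemble after each added weak learner. Since the `xgboost` package may not be installed everywhere, scikit-learn's `GradientBoostingClassifier` stands in here (its boosting mechanics and interface are analogous); the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data; 50 weak learners added one after another, each
# correcting the errors of the current ensemble.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
gb = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predictions after each boosting stage, from 1 tree up to 50.
stage_preds = list(gb.staged_predict(X))
first_acc = float(np.mean(stage_preds[0] == y))   # one weak learner
final_acc = float(np.mean(stage_preds[-1] == y))  # full ensemble
```

Training accuracy after the full ensemble is at least that of the first weak learner, reflecting the iterative error-correction described in steps (1)-(5).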

3. Results

In this study, we extracted 23 features, including 17 continuous variables (Table 1) and 6 dichotomous variables (Table 2), from 334 AP patients who were admitted within one week after AP onset. Among the study patients, 80 (23.95%) developed AKI during hospitalization, of whom 13 patients had AKI stage 1, 37 stage 2, and 30 stage 3 according to the KDIGO criteria.

Table 1: Continuous variables and AKI stages in the non-AKI and AKI groups.

No. | Variable | Non-AKI group (n = 254) | AKI group (n = 80) | P value
 | AKI stage (count, %) | | |
 | Stage 1 | | 13 (16.25%) |
 | Stage 2 | | 37 (46.25%) |
 | Stage 3 | | 30 (37.50%) |
1 | Age (year) | 45.85 (37.00, 54.00) | 46.86 (39.62, 51.55) | 0.559
2 | AMY (unit) | 422.53 (77.00, 457.00) | 773.33 (118.25, 840.75) | 0.098
3 | LPS (unit) | 1132.58 (240.00, 1234.00) | 1750.73 (330.25, 1442.25) | 0.087
4 | TG (mmol/L) | 5.76 (1.00, 5.70) | 7.03 (2.30, 6.89) | 0.300
5 | Chol (mmol/L) | 4.84 (3.08, 5.60) | 4.58 (2.30, 6.89) | 0.533
6 | WBC (10⁹/L) | 12.54 (9.10, 14.80) | 12.02 (8.35, 14.25) | 0.412
7 | CRP (mg/L) | 155.54 (94.10, 213.10) | 189.22 (147.35, 236.90) | 0.012
8 | IL-6 (pg/mL) | 156.96 (44.35, 161.15) | 283.66 (105.50, 174.50) | 0.08
9 | PCT (μg/L) | 2.17 (0.21, 1.99) | 13.61 (1.71, 16.98) | <0.001
10 | TBIL (μmol/L) | 24.51 (14.60, 28.70) | 45.21 (18.85, 51.53) | <0.001
11 | ALT (U/L) | 65.75 (20.00, 59.00) | 62.35 (22.25, 65.50) | 0.82
12 | Hb (g/L) | 125.39 (109.00, 141.00) | 114.75 (90.00, 137.25) | 0.006
13 | PLT (10⁹/L) | 174.89 (126.00, 215.00) | 128.98 (84.50, 179.75) | <0.001
14 | PT (s) | 13.56 (12.20, 13.70) | 13.59 (12.23, 14.48) | 0.971
15 | BMI | 25.37 (23.00, 27.00) | 26.60 (24.60, 29.08) | 0.005
16 | APACHE II | 9.87 (7.00, 13.00) | 18.28 (12.00, 17.00) | <0.001
17 | IAP (mmHg) | 7.26 (5.00, 10.00) | 14.11 (12.00, 17.00) | <0.001

Abbreviations: AMY: serum amylase; LPS: serum lipase; TG: triglycerides; Chol: cholesterol; WBC: white cell count; CRP: c-reactive protein; IL-6: interleukin-6; PCT: procalcitonin; TBIL: total bilirubin; ALT: alanine aminotransferase; Hb: hemoglobin; PLT: platelet; PT: prothrombin time; BMI: body mass index; APACHE II: Acute Physiology and Chronic Health Evaluation II; IAP: intra-abdominal pressure.

Table 2: Dichotomous variables in the non-AKI and AKI groups.

No. | Variable | Non-AKI count | Non-AKI (%) | AKI count | AKI (%) | P value
8 | Alcohol abuse | 60 | 23.53 | 28 | 35.00 | 0.020


Abbreviations: ARDS: acute respiratory distress syndrome; Death: death during the hospitalization; AKI: acute kidney injury.

The results showed that, in comparison with patients in the non-AKI group, patients who suffered from AKI had a higher incidence of ARDS and death; higher BMI (P = 0.005), IAP (P < 0.001), and APACHE II scores (P < 0.001); and higher percentages of male sex and alcohol consumption (P = 0.020), together with significantly higher serum levels of CRP (P = 0.012), PCT (P < 0.001), and TBIL (P < 0.001). The levels of Hb (P = 0.006) and PLT (P < 0.001) in the AKI group were lower than those in the non-AKI group.

3.1. Predictive Effects of Different Models

We generated five models, namely, LR (logistic regression), SVM (support vector machine), XGBoost (extreme gradient boosting), RF (random forest), and CART (classification and regression tree), to predict the development of AKI in AP patients after admission. Figure 2 shows the performance of the 5 models on the test dataset in terms of receiver operating characteristic (ROC) curves. The areas under the ROC curves (AUC) showed that the XGBoost model achieved the best predictive performance for AKI, with an AUC of 0.9193. Taking the LR model as a reference, the XGBoost and RF models outperformed it in predicting AKI, while the SVM and CART models did not, as indicated by the AUC values.

Table 3 presents a set of detailed performance metrics for the 5 models. The XGBoost model achieved the best overall performance, with the highest AUC (0.9193), the highest sensitivity (0.6190), the highest specificity (0.8815), and the second-highest accuracy (0.8631). The ranks of feature importance in each model are listed in Table 4. As shown, APACHE II, IAP, and PCT were the top three features contributing to the prediction models for AKI in AP patients.
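For reference, the metrics reported in Table 3 are computed from predictions as follows. The labels and scores below are toy values, not the study's data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Toy outcome labels (1 = AKI) and predicted probabilities.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.10, 0.30, 0.20, 0.80, 0.60, 0.40, 0.35, 0.15])

# AUC is threshold-free; the other metrics use a 0.5 cutoff.
auc = roc_auc_score(y_true, y_score)
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)              # true-positive rate
specificity = tn / (tn + fp)              # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
```

This also makes the sensitivity/specificity trade-off concrete: moving the 0.5 cutoff shifts cases between the false-negative and false-positive cells without changing the AUC.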



Abbreviations: AUC: area under the receiver operating characteristic curve; LR: logistic regression; XGBoost: extreme gradient boosting; SVM: support vector machine; RF: random forest.



Abbreviations: RF: random forest; XGBoost: extreme gradient boosting; LR: logistic regression; APACHE II: Acute Physiology and Chronic Health Evaluation II; IAP: intra-abdominal pressure; PCT: procalcitonin; CRP: c-reactive protein; TBIL: total bilirubin; TG: triglycerides; LPS: serum lipase.

4. Discussion

Acute kidney injury (AKI) is a common complication of AP, with a reported incidence of 14%-43% [2, 3, 24]. According to relevant reports, AKI in AP may be caused by the release of large amounts of inflammatory mediators and cytokines, which lead to microcirculatory disorders and tissue damage [25]. At the same time, hypercoagulability and SIRS may damage the renal tubules [26]. Consistent with this inflammatory mechanism, PCT was the second most important risk factor in the XGBoost model in this study. The clinical outcomes of AP patients complicated with AKI are extremely poor, with a mortality of up to 40%-70% in previous studies [6, 24]. Hence, identifying high-risk patients and preventing further deterioration of their renal function should be a top priority.

We compared the performance of four machine learning models and the traditional logistic regression model in predicting AKI at an early stage. The results showed that XGBoost achieved the best performance in terms of both predictive power and predictive stability. XGBoost is a scalable tree boosting system that is widely used by data scientists and provides state-of-the-art results on many problems. XGBoost helps reduce overfitting compared with gradient tree boosting by using only a random subset of descriptors when building each tree and is known as a "regularized boosting" technique. The balance between sensitivity and specificity of each algorithm should also be evaluated. In particular, XGBoost had higher specificity than sensitivity, meaning it is more likely to be correct in ruling out AKI than in detecting it. Our results demonstrate that XGBoost is a very effective machine learning method in terms of specificity and accuracy.

We listed the features of the highest importance in the three best-performing models. The APACHE II score, IAP, and PCT turned out to be the top three most important features, with lipase also ranking highly. The APACHE II score is a nonspecific scoring system that is related to the severity and complications of AP [27, 28]. Previous studies found that the APACHE II score is an independent risk factor for AP complicated with AKI [29, 30]. The median APACHE II score of patients in the AKI group was much higher than that in the non-AKI group (18.28 vs. 9.87, P < 0.001). IAP was the most important feature in the XGBoost model, and previous studies have shown that IAP is an independent risk factor for AKI [31–34]. Locally in the abdomen, intra-abdominal hypertension compresses and compromises blood flow in the renal parenchyma, vena cava, and renal vein. Increased IAP affects the kidney through a series of mechanisms that result in a decrease in the glomerular filtration rate (GFR) with oliguria, which is usually the first clinical evidence of kidney impairment [35, 36]. Screening for increased IAP and intervening to decrease it and improve renal perfusion are essential to minimize these negative effects [37].

Novel machine learning techniques are relatively free of the limitations of conventional statistical analysis, such as sensitivity to multicollinearity, and have demonstrated improved predictive performance compared with classical statistical methods. Machine learning has been used to predict AKI in several disease populations (e.g., severely burned patients and liver transplant recipients) with favorable performance [38, 39]. Compared with traditional static predictive models, deep-learning techniques have the advantage of automatically learning features and relationships from readily available data [40], which makes early prediction of AKI possible before significant changes occur in classical indicators such as creatinine and/or urine output. Earlier identification of renal injury using medical data easily obtained at admission provides a "therapeutic window" for clinicians to take preventive measures against further renal function damage.

Previous studies showed that early detection and treatment of AKI help most patients recover renal function and reach a better clinical outcome [41, 42]. Therefore, it is particularly important to identify the risk factors and prognostic factors for acute pancreatitis with acute kidney injury at an early stage, so as to develop a predictive model that helps clinicians take preventive intervention measures and avoid renal function damage [43]. Our study provides predictive models built with machine learning algorithms that perform better in predicting AKI in AP patients than the classical LR algorithm, which may have a positive effect on the outcomes of AP patients.

Our study has several limitations. Firstly, our analysis used only a small number of cases derived from a single AP treatment center; the performance of the machine learning techniques may differ when they are applied to a sample from a different institution with a different distribution of covariates. Secondly, the models were not validated against external data, such as recent data from our center, data from other centers, or open databases.

Compared with the classical logistic regression model, machine learning models (XGBoost and RF) using features that can be easily obtained at admission performed better in predicting AKI in AP patients. Predictive models using machine learning algorithms may help clinicians identify AKI early and prevent further injury to renal function.

Data Availability

The data in this study are available for other researchers to verify the results of our article, replicate the analysis, and conduct secondary analyses. Other researchers may contact us by e-mail to obtain the data.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors’ Contributions

Cheng Qu and Lin Gao contributed equally to this work.


Acknowledgments

This study was supported by the National Natural Science Foundation of China (No. 81670588) and the Key Research and Development Program Foundation of Jiangsu Province of China (No. BE2016749).


References

  1. C. E. Forsmark, S. S. Vege, and C. M. Wilcox, “Acute pancreatitis,” The New England Journal of Medicine, vol. 376, no. 6, pp. 596–599, 2017.
  2. D. Ljutić, T. Piplović-Vuković, V. Raos, and P. Andrews, “Acute renal failure as a complication of acute pancreatitis,” Renal Failure, vol. 18, no. 4, pp. 629–633, 1996.
  3. L. Compañy, J. Sáez, J. Martínez et al., “Factors predicting mortality in severe acute pancreatitis,” Pancreatology, vol. 3, no. 2, pp. 144–148, 2003.
  4. K. Nasir and A. Ahamd, “Clinical course of acute pancreatitis in chronic kidney disease patients in a single kidney center (PGTi) in Karachi,” The Arab Journal of Nephrology and Transplantation, vol. 5, no. 2, pp. 87–90, 2012.
  5. N. Petejova and A. Martinek, “Acute kidney injury following acute pancreatitis: a review,” Biomedical Papers of the Medical Faculty of the University Palacky, Olomouc, Czech Republic, vol. 157, no. 2, pp. 105–113, 2013.
  6. P. Kes, Ž. Vučičević, I. Ratković-Gusić, and A. Fotivec, “Acute renal failure complicating severe acute pancreatitis,” Renal Failure, vol. 18, no. 4, pp. 621–628, 1996.
  7. H. Y. Lin, J. I. Lai, Y. C. Lai, P. C. Lin, S. C. Chang, and G. J. Tang, “Acute renal failure in severe pancreatitis: a population-based study,” Upsala Journal of Medical Sciences, vol. 116, no. 2, pp. 155–159, 2011.
  8. Kidney Disease: Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group, “KDIGO clinical practice guideline for acute kidney injury,” Kidney International Supplements, vol. 2, no. 1, pp. 1–138, 2012.
  9. H. Li, Z. Qian, Z. Liu, X. Liu, X. Han, and H. Kang, “Risk factors and outcome of acute renal failure in patients with severe acute pancreatitis,” Journal of Critical Care, vol. 25, no. 2, pp. 225–229, 2010.
  10. J. Wu, Z. Xu, H. Zhang et al., “Clinical study on the early predictive value of renal resistive index in acute kidney injury associated with severe acute pancreatitis,” Zhonghua Wei Zhong Bing Ji Jiu Yi Xue, vol. 31, no. 8, pp. 998–1003, 2019.
  11. X. Chai, H. B. Huang, G. Feng et al., “Baseline serum cystatin C is a potential predictor for acute kidney injury in patients with acute pancreatitis,” Disease Markers, vol. 2018, Article ID 8431219, 7 pages, 2018.
  12. C. Wu, L. Ke, Z. Tong et al., “Hypertriglyceridemia is a risk factor for acute kidney injury in the early phase of acute pancreatitis,” Pancreas, vol. 43, no. 8, pp. 1312–1316, 2014.
  13. K. H. Zou, A. J. O’Malley, and L. Mauri, “Receiver-operating characteristic analysis for evaluating diagnostic tests and predictive models,” Circulation, vol. 115, no. 5, pp. 654–657, 2007.
  14. A. Gupta, T. Liu, S. Shepherd, and W. Paiva, “Using statistical and machine learning methods to evaluate the prognostic accuracy of SIRS and qSOFA,” Healthcare Informatics Research, vol. 24, no. 2, pp. 139–147, 2018.
  15. S. Le, J. Hoffman, C. Barton et al., “Pediatric severe sepsis prediction using machine learning,” Frontiers in Pediatrics, vol. 7, article 413, 2019.
  16. D. Arefan, A. A. Mohamed, W. A. Berg, M. L. Zuley, J. H. Sumkin, and S. Wu, “Deep learning modeling using normal mammograms for predicting breast cancer risk,” Medical Physics, vol. 47, no. 1, pp. 110–118, 2019.
  17. L. Breiman, “Random forests,” Machine Learning, vol. 45, no. 1, pp. 5–32, 2001.
  18. J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, 1986.
  19. T. Chen and C. Guestrin, “XGBoost: a scalable tree boosting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 2016.
  20. P. A. Banks, T. L. Bollen, C. Dervenis et al., “Classification of acute pancreatitis--2012: revision of the Atlanta classification and definitions by international consensus,” Gut, vol. 62, no. 1, pp. 102–111, 2013.
  21. V. Bewick, L. Cheek, and J. Ball, “Statistics review 14: logistic regression,” Critical Care, vol. 9, no. 1, pp. 112–118, 2005.
  22. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995.
  23. J. H. Friedman, “Greedy function approximation: a gradient boosting machine,” The Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.
  24. G. Pupelis, “Renal failure in acute pancreatitis. Timing of dialysis and surgery,” Przeglad Lekarski, vol. 5, Supplement 5, pp. 29–31, 2000.
  25. R. Bonegio and W. Lieberthal, “Role of apoptosis in the pathogenesis of acute renal failure,” Current Opinion in Nephrology and Hypertension, vol. 11, no. 3, pp. 301–308, 2002.
  26. G. I. Papachristou, “Prediction of severe acute pancreatitis: current knowledge and novel insights,” World Journal of Gastroenterology, vol. 14, no. 41, pp. 6273–6275, 2008.
  27. P. A. Banks and M. L. Freeman, “Practice guidelines in acute pancreatitis,” The American Journal of Gastroenterology, vol. 101, no. 10, pp. 2379–2400, 2006.
  28. M. Larvin, “Assessment of severity and prognosis in acute pancreatitis,” European Journal of Gastroenterology & Hepatology, vol. 9, no. 2, pp. 122–130, 1997.
  29. B. G. Kim, M. H. Noh, C. H. Ryu et al., “A comparison of the BISAP score and serum procalcitonin for predicting the severity of acute pancreatitis,” The Korean Journal of Internal Medicine, vol. 28, no. 3, pp. 322–329, 2013.
  30. Z. Jin, L. Xu, X. Wang, and D. Yang, “Risk factors for worsening of acute pancreatitis in patients admitted with mild acute pancreatitis,” Medical Science Monitor, vol. 23, pp. 1026–1032, 2017.
  31. D. Matthew, D. Oxman, K. Djekidel, Z. Ahmed, and M. Sherman, “Abdominal compartment syndrome and acute kidney injury due to excessive auto-positive end-expiratory pressure,” American Journal of Kidney Diseases, vol. 61, no. 2, pp. 285–288, 2013.
  32. A. Z. Al-Bahrani, G. H. Abid, A. Holt et al., “Clinical relevance of intra-abdominal hypertension in patients with severe acute pancreatitis,” Pancreas, vol. 36, no. 1, pp. 39–43, 2008.
  33. J. J. De Waele, E. Hoste, S. I. Blot, J. Decruyenaere, and F. Colardyn, “Intra-abdominal hypertension in patients with severe acute pancreatitis,” Critical Care, vol. 9, no. 4, pp. R452–R457, 2005.
  34. J. M. H. Rosas, S. N. Soto, J. S. Aracil et al., “Intra-abdominal pressure as a marker of severity in acute pancreatitis,” Surgery, vol. 141, no. 2, pp. 173–178, 2007.
  35. W.-D. Li, L. Jia, Y. Ou, Y. X. Huang, and S. M. Jiang, “Surveillance of intra-abdominal pressure and intestinal barrier function in a rat model of acute necrotizing pancreatitis and its potential early therapeutic window,” PLoS One, vol. 8, no. 11, article e78975, 2013.
  36. G. Pupelis, E. Austrums, K. Snippe, and M. Berzins, “Clinical significance of increased intraabdominal pressure in severe acute pancreatitis,” Acta Chirurgica Belgica, vol. 102, no. 2, pp. 71–74, 2002.
  37. M. K. Goenka, U. Goenka, S. Afzalpurkar, S. C. Tiwari, R. Agarwal, and I. K. Tiwary, “Role of static and dynamic intra-abdominal pressure monitoring in acute pancreatitis,” Pancreas, vol. 49, no. 5, pp. 663–667, 2020.
  38. N. K. Tran, S. Sen, T. L. Palmieri et al., “Artificial intelligence and machine learning for predicting acute kidney injury in severely burned patients: a proof of concept,” Burns, vol. 45, no. 6, pp. 1350–1358, 2019.
  39. H. C. Lee, S. Yoon, S. M. Yang et al., “Prediction of acute kidney injury after liver transplantation: machine learning approaches vs. logistic regression model,” Journal of Clinical Medicine, vol. 7, no. 11, p. 428, 2018.
  40. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  41. D. D. Tran, P. L. Oe, C. W. H. De Fijter, J. Van der Meulen, and M. A. Cuesta, “Acute renal failure in patients with acute pancreatitis: prevalence, risk factors, and outcome,” Nephrology, Dialysis, Transplantation, vol. 8, no. 10, pp. 1079–1084, 1993.
  42. S. Shah, A. C. Leonard, K. Harrison, K. Meganathan, A. L. Christianson, and C. V. Thakar, “Mortality and recovery associated with kidney failure due to acute kidney injury,” Clinical Journal of the American Society of Nephrology, vol. 15, no. 7, pp. 995–1006, 2020.
  43. O. Gajic, O. Dabbagh, P. K. Park et al., “Early identification of patients at risk of acute lung injury: evaluation of lung injury prediction score in a multicenter cohort study,” American Journal of Respiratory and Critical Care Medicine, vol. 183, no. 4, pp. 462–470, 2011.

Copyright © 2020 Cheng Qu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
