Input: Datasets (train_loader, valid_loader, test_loader),
Learning rate, Weight decay, Epochs, Batch size.
Output: The predicted values of the test samples and the prediction performance of the model.
1: Training process: |
2: Define the model: Three layers of GCN (two hidden layers and one output layer) |
3: Load the datasets: train_loader, valid_loader |
4: # model.train() |
5: for epoch = 1 to Epochs do:
6: Propagate all data forward, using Leaky ReLU as the activation function
7: Use the cross-entropy loss function to calculate the loss value
8: Clear the gradients: optimizer.zero_grad()
9: Backpropagate and calculate the gradients of the parameters
10: Use the Adam optimizer to update the parameters: optimizer.step()
11: Calculate the accuracy of the current model on the training set |
12: Calculate the accuracy of the current model on the validation set
13: end for |
14: Test process: |
15: # model.eval()
16: Load the trained model |
17: Load the dataset: test_loader |
18: Calculate the predicted value y_pred of the test samples |
19: Evaluate the model performance: Precision, Recall, F1-score, AUC |
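The procedure above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the graph, features, labels, split indices, and hyperparameters (`lr`, `weight_decay`, `EPOCHS`, layer sizes) are all synthetic placeholders, and a dense symmetrically normalized adjacency stands in for a GCN convolution library.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N, FEATS, HID, CLASSES = 60, 16, 32, 2  # hypothetical sizes

# Synthetic graph: symmetric adjacency with self-loops, normalized as
# A_hat = D^{-1/2} (A + I) D^{-1/2} (standard GCN propagation rule).
A = (torch.rand(N, N) < 0.1).float()
A = ((A + A.t()) > 0).float() + torch.eye(N)
d = A.sum(1)
A_hat = A / torch.sqrt(d.unsqueeze(0) * d.unsqueeze(1))

X = torch.randn(N, FEATS)                    # node features
y = torch.randint(0, CLASSES, (N,))          # node labels
train_idx = torch.arange(0, 30)              # placeholder splits
valid_idx = torch.arange(30, 45)
test_idx = torch.arange(45, 60)

class GCN(nn.Module):
    """Three GCN layers (two hidden + one output), Leaky ReLU activations."""
    def __init__(self):
        super().__init__()
        self.w1 = nn.Linear(FEATS, HID)
        self.w2 = nn.Linear(HID, HID)
        self.w3 = nn.Linear(HID, CLASSES)

    def forward(self, a, x):
        x = F.leaky_relu(self.w1(a @ x))
        x = F.leaky_relu(self.w2(a @ x))
        return self.w3(a @ x)                # logits for cross-entropy

def accuracy(logits, idx):
    return (logits[idx].argmax(1) == y[idx]).float().mean().item()

model = GCN()
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Training process (steps 5-13 of the pseudocode)
model.train()
for epoch in range(100):
    logits = model(A_hat, X)                 # forward pass
    loss = F.cross_entropy(logits[train_idx], y[train_idx])
    opt.zero_grad()                          # clear gradients
    loss.backward()                          # backpropagate
    opt.step()                               # Adam update
    with torch.no_grad():                    # monitor, per steps 11-12
        train_acc = accuracy(logits, train_idx)
        valid_acc = accuracy(logits, valid_idx)

# Test process (steps 14-19): predict on the test split
model.eval()
with torch.no_grad():
    logits = model(A_hat, X)
    y_pred = logits[test_idx].argmax(1)
```

From `y_pred` and `y[test_idx]`, the evaluation metrics in step 19 (Precision, Recall, F1-score, AUC) would typically be computed with a metrics library such as scikit-learn.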