| Step | Parameters | Tested values | Selected value | Note |
|---|---|---|---|---|
| 1 | Optimization method | RMSProp, Adam, Adadelta | Adam | Initial learning rate was set to 0.003 for RMSProp [23] and Adam [24], and to 1.0 for Adadelta [25] |
| 2 | Minibatch size | 50, 100, 200 | 200 | – |
| 3 | Dropout ratio (%) | 0, 10, 30, 50 | 0 | Dropout ratio is the rate at which output gate units in an LSTM layer are randomly dropped |
| 4 | LSTM layers | 1, 2 | 2 | The model with 2 LSTM layers showed higher accuracy than the model with 1 LSTM layer |
| 5 | Fully connected layer | 0, 1 | 1 | The model with a fully connected layer achieved higher accuracy than the model without one |
| 6 | Input features | 3, 5 | 3 | Higher accuracy was obtained with three features (RF, PRI, PW) than with five (AOA, AMP, RF, PRI, PW) |
| 7 | Decay ratio | Use, no use | Use | Higher accuracy was obtained when the learning rate was gradually decayed by multiplying it by 0.9 each epoch after epoch 10 |
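The learning-rate schedule selected in step 7 can be sketched as a small function; this is a minimal illustration, not the authors' code, and the function name `decayed_lr` and the epoch numbering (decay applied per epoch after epoch 10) are assumptions based on the table's description.

```python
def decayed_lr(epoch, base_lr=0.003, decay=0.9, start_epoch=10):
    """Return the learning rate for a given epoch.

    The rate stays at base_lr (the Adam initial rate from step 1)
    up to and including start_epoch, then is multiplied by `decay`
    once per subsequent epoch, as described in step 7.
    """
    return base_lr * decay ** max(0, epoch - start_epoch)


# Example: the rate is constant through epoch 10, then shrinks geometrically.
for epoch in (1, 10, 11, 12):
    print(epoch, decayed_lr(epoch))
```

Most deep-learning frameworks provide an equivalent built-in (e.g. an exponential or lambda-based learning-rate scheduler), so in practice this schedule would typically be passed to the optimizer rather than implemented by hand.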