Model no. | Deep learning model | Tuning parameters |
--- | --- | --- |
1 | CNN | momentum = 0.5 to 0.9; number of epochs = 0.9; batch size = 32. |
2 | DarkNet | batch = 64; momentum = 0.9; learning_rate = 0.000008. |
3 | DNN | batch_size = c(32, 64); dropout_rate = c(0.1, 0.2, 0.3); units = c(10, 20). |
4 | GoogleNet | Each input image resized from 647 × 511 × 3 to 224 × 224 × 3 pixels, the input dimensions on which GoogleNet was trained. |
5 | InceptionResNetV2 | outputs = Dense(100, activation='softmax')(base_model.output); model = Model(base_model.inputs, outputs). |
6 | Inceptionv3 | batch_size = c(64); dropout_rate = c(0.1, 0.2, 0.3); units = c(10, 20, 30). |
7 | LSTM | rule search (evaluation measure) = entropy; minimum rule coverage = 2; maximum rule length = 6. |
8 | MobileNetV2 | learning_rate = 0.0001; no. of epochs = 10. |
9 | NASNet-large | learning_rate = 0.0002; no. of epochs = 20. |
10 | ResNet34 | Optimization method: Adam; momentum: 0.90; weight-decay: 0.0006; dropout: 0.6; batch size: 100; learning rate: 0.02; total no. of epochs: 20. |
11 | ResNet50 | Optimization method: Adam; momentum: 0.97; weight-decay: 0.0005; dropout: 0.7; batch size: 100; learning rate: 0.03; total no. of epochs: 30. |
12 | SAE | batch_size = c(64); dropout_rate = c(0.1, 0.2, 0.4); units = c(10, 20, 40). |
13 | VGG16 | Optimization method: SGD; momentum: 0.90; weight-decay: 0.0004; dropout: 0.6; batch size: 164; learning rate: 0.06; total no. of epochs: 60. |
14 | VGG19 | Optimization method: SGD; momentum: 0.97; weight-decay: 0.0005; dropout: 0.3; batch size: 128; learning rate: 0.07; total no. of epochs: 40. |
15 | Xception | Optimization method: SGD; momentum: 0.8; learning rate: 0.035; learning rate decay: factor of 0.92 every 4 epochs. |
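The `c(...)` notation in the DNN, Inceptionv3, and SAE rows reads as R-style vectors of candidate values, i.e. a hyperparameter search grid rather than a single setting. As a minimal sketch (not the authors' actual tuning code), the DNN grid from model no. 3 can be enumerated with the standard library alone:

```python
from itertools import product

# Candidate values transcribed from the DNN row (model no. 3);
# each c(...) vector becomes a Python list of options.
grid = {
    "batch_size": [32, 64],
    "dropout_rate": [0.1, 0.2, 0.3],
    "units": [10, 20],
}

def expand_grid(grid):
    """Enumerate every combination in the search grid as a dict."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*grid.values())]

configs = expand_grid(grid)
print(len(configs))  # 2 * 3 * 2 = 12 candidate configurations
print(configs[0])    # {'batch_size': 32, 'dropout_rate': 0.1, 'units': 10}
```

Each of the 12 resulting dicts would then be passed to the model constructor and scored, with the best-performing combination kept as the reported tuning parameters.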