Research Article

Novel Crow Swarm Optimization Algorithm and Selection Approach for Optimal Deep Learning COVID-19 Diagnostic Model

Table 2

Parameters of the COVID-19 deep learning models.

| Model no. | Deep learning model | Tuning parameters |
|---|---|---|
| 1 | CNN | Momentum = 0.5 to 0.9; number of epochs = 0.9; batch size = 32. |
| 2 | DarkNet | batch = 64; momentum = 0.9; learning_rate = 0.000008. |
| 3 | DNN | batch_size = c(32, 64); dropout_rate = c(0.1, 0.2, 0.3); units = c(10, 20). |
| 4 | GoogleNet | Each input image is resized from 647 × 511 × 3 to 224 × 224 × 3 pixels, the input dimensions used to train GoogleNet. |
| 5 | InceptionResNetV2 | outputs = Dense(100, activation='softmax')(base_model.output); model = Model(base_model.inputs, outputs). |
| 6 | Inceptionv3 | batch_size = c(64); dropout_rate = c(0.1, 0.2, 0.3); units = c(10, 20, 30). |
| 7 | LSTM | Rule search (evaluation measure) = entropy; minimum rule coverage = 2; maximum rule length = 6. |
| 8 | MobileNetV2 | learning_rate = 0.0001; no. of epochs = 10. |
| 9 | NASNet-large | learning_rate = 0.0002; no. of epochs = 20. |
| 10 | ResNet34 | Optimization method: Adam; momentum: 0.90; weight decay: 0.0006; dropout: 0.6; batch size: 100; learning rate: 0.02; total no. of epochs: 20. |
| 11 | ResNet50 | Optimization method: Adam; momentum: 0.97; weight decay: 0.0005; dropout: 0.7; batch size: 100; learning rate: 0.03; total no. of epochs: 30. |
| 12 | SAE | batch_size = c(64); dropout_rate = c(0.1, 0.2, 0.4); units = c(10, 20, 40). |
| 13 | VGG16 | Optimization method: SGD; momentum: 0.90; weight decay: 0.0004; dropout: 0.6; batch size: 164; learning rate: 0.06; total no. of epochs: 60. |
| 14 | VGG19 | Optimization method: SGD; momentum: 0.97; weight decay: 0.0005; dropout: 0.3; batch size: 128; learning rate: 0.07; total no. of epochs: 40. |
| 15 | Xception | Optimization method: SGD; momentum: 0.8; learning rate: 0.035; learning rate decay: factor of 0.92 every 4 epochs. |
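Rows 3, 6, and 12 give their candidate values in R's c(...) vector notation, implying a search over all combinations. The table does not state how the candidates are combined, so as a minimal sketch, assuming an exhaustive grid search, the DNN candidates from row 3 can be enumerated as follows:

```python
from itertools import product

# Candidate values for the DNN model (row 3 of Table 2),
# transcribed from the table's c(...) vectors.
batch_sizes = [32, 64]
dropout_rates = [0.1, 0.2, 0.3]
units = [10, 20]

# Every configuration an exhaustive grid search would evaluate.
grid = [
    {"batch_size": b, "dropout_rate": d, "units": u}
    for b, d, u in product(batch_sizes, dropout_rates, units)
]

print(len(grid))  # 2 x 3 x 2 = 12 configurations
```

Each of the 12 configurations would then be used to train and score one model instance; rows 6 (Inceptionv3) and 12 (SAE) define analogous but larger grids.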