Research Article

Comparative Analysis of Recent Architectures of Convolutional Neural Networks

Table 3

Architectural issues.

| S. No. | Architecture | Benefits | Weaknesses |
| --- | --- | --- | --- |
| 1 | LeNet | Automatic feature learning reduces parameter count and computation cost. | Large filter sizes; scales poorly across diverse image classes. |
| 2 | AlexNet | Uses both large and small filter sizes to extract low- and high-level features. | Neurons in the first and second layers can become inactive. |
| 3 | ZFNet | Introduces parameter tuning guided by visualization of intermediate-layer outputs. | Visualization may require additional processing. |
| 4 | VGG | Introduces the concept of the effective receptive field; proposes a simple, homogeneous topology. | Fully connected layers demand high computational power. |
| 5 | GoogLeNet | Introduces multiscale filters; reduces dimensionality via a bottleneck layer (see sketch below). | Some information may be lost in the bottleneck layer. |
| 6 | Inception-V3 | Introduces asymmetric filters and bottlenecks to reduce computation cost. | Complex architecture design. |
| 7 | Highway networks | Introduces a new training mechanism based on gated cross-layer connections. | Complex design. |
| 8 | Inception-V4 | Multilevel feature extraction. | Complex architecture design may add computation cost. |
| 9 | ResNet | Decreases the error rate; introduces the residual learning concept (see sketch below). | Stacking identical modules can over-adapt hyperparameters to a specific task. |
| 10 | DelugeNet | | |
| 11 | Xception | Introduces depthwise separable convolution (see sketch below); applies cardinality to learn good abstractions. | High computational cost. |
| 12 | ResNeXt | Applies cardinality in each layer for diverse transformations; uses grouped convolution. | High computational power. |
| 13 | DenseNet | Ensures maximum information flow between layers and avoids learning redundant features (see sketch below). | The growth in feature maps increases the parameter count. |
| 14 | Convolutional block attention module (CBAM) | Applies global average pooling and global max pooling simultaneously, improving information flow (see sketch below). | May increase the computational load. |
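To make a few of the mechanisms in Table 3 concrete, the sketches below illustrate them in PyTorch; the framework choice, layer sizes, and class names are illustrative assumptions, not taken from the surveyed papers. The first is the bottleneck idea from the GoogLeNet row: a 1×1 convolution shrinks the channel dimension before an expensive 3×3 convolution, trading a small risk of information loss for a large drop in computation.

```python
import torch
import torch.nn as nn

class BottleneckBranch(nn.Module):
    """A 1x1 'bottleneck' convolution reduces channels before a 3x3
    convolution, cutting the cost of the 3x3 filter bank."""
    def __init__(self, in_channels, reduced_channels, out_channels):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        self.conv = nn.Conv2d(reduced_channels, out_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(self.relu(self.reduce(x))))

x = torch.randn(1, 256, 28, 28)          # one 256-channel feature map
branch = BottleneckBranch(256, 64, 128)  # 256 -> 64 -> 128 channels
print(branch(x).shape)                   # torch.Size([1, 128, 28, 28])
```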
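Next, a minimal residual block in the spirit of the ResNet row: the stacked convolutions learn a residual F(x), and the identity shortcut adds x back, which is what makes very deep stacks trainable. The channel count and layer layout here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the layers learn F(x) and the output is
    F(x) + x, so gradients can bypass the convolutions via the shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```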
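The separable convolution named in the Xception row and the grouped convolution in the ResNeXt row can both be expressed through the `groups` argument of a standard convolution. The sketch below uses `groups=in_channels` for the depthwise case; a ResNeXt-style grouped convolution would simply use a smaller group count.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3
    convolution followed by a 1x1 pointwise convolution that mixes
    channels. groups=in_channels gives one spatial filter per channel."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)
print(SeparableConv2d(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```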
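The DenseNet row's benefit and weakness both follow from one operation, channel-wise concatenation: every layer's output is appended to its input, so information flows freely to later layers, but the channel count, and with it the parameter count of later layers, keeps growing. The growth rate of 32 below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One DenseNet-style layer: the new feature maps are concatenated
    with the input, so later layers see all earlier features."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, growth_rate, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # channel count grows by growth_rate at every layer
        return torch.cat([x, self.relu(self.conv(x))], dim=1)

x = torch.randn(1, 64, 28, 28)
print(DenseLayer(64, 32)(x).shape)  # torch.Size([1, 96, 28, 28])
```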
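Finally, the channel-attention half of the convolutional block attention module: global average pooling and global max pooling are applied in parallel, passed through a shared MLP, summed, and squashed into per-channel weights. The shared-MLP structure and the reduction ratio of 16 are common choices, assumed here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """CBAM-style channel attention: average- and max-pooled descriptors
    share one MLP; their sum, after a sigmoid, reweights the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))  # global average pooling
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))   # global max pooling
        return x * torch.sigmoid(avg + mx)           # per-channel reweighting

x = torch.randn(1, 64, 28, 28)
print(ChannelAttention(64)(x).shape)  # torch.Size([1, 64, 28, 28])
```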