Research Article
Comparative Analysis of Recent Architecture of Convolutional Neural Network
Table 2
Comparison of CNN architectures.
| S. no. | Architecture | Year | Major contribution | Parameters | Depth | Reference |
|---|---|---|---|---|---|---|
| 1 | LeNet | 1998 | Initial CNN architecture | 0.060 M | 5 | [15] |
| 2 | AlexNet | 2012 | Deeper than LeNet; uses ReLU and overlapping pooling | 60 M | 8 | [20] |
| 3 | ZFNet | 2014 | Visualization of intermediate layers | 60 M | 18 | [21] |
| 4 | VGG | 2014 | Uses small kernel sizes | 4 M | 19 | [22] |
| 5 | GoogLeNet | 2015 | Introduced the block concept and the split, transform, and merge idea | 23.6 M | 22 | [23] |
| 6 | Inception-V3 | 2015 | Addressed the bottleneck issue and added smaller filter sizes | 23.6 M | 159 | [24] |
| 7 | Highway networks | 2015 | Introduced the multipath concept | 2.3 M | 19 | [25] |
| 8 | Inception-V4 | 2016 | Split, transform, and merge idea | 35 M | 70 | [26] |
| 9 | ResNet | 2016 | Residual learning | 25.6 M | 152 | [27] |
| 10 | DelugeNet | 2016 | Allows cross-layer information flow | 20.2 M | 146 | [28] |
| 11 | Xception | 2017 | Depth-wise convolution followed by a point-wise convolution | 8.6 M | 452 | [29] |
| 12 | ResNeXt | 2017 | Cardinality; homogeneous topology; grouped convolutions | 27.5 M | 152 | [30] |
| 13 | DenseNet | 2017 | Cross-layer information flow | 25.6 M | 190 | [31] |
| 14 | Channel-boosted CNN | 2018 | Boosts the original channels with additional artificial channels | — | — | [32] |
| 15 | Convolutional block attention module | 2018 | Feature-map attention | — | — | [33] |
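The parameter counts in the table reflect each architecture's design choices; for example, Xception's depth-wise separable convolution (a depth-wise convolution followed by a point-wise convolution) needs far fewer parameters than a standard convolution of the same shape. A minimal sketch of that parameter arithmetic, with illustrative function names not taken from any of the cited papers:

```python
def conv_params(c_in, c_out, k):
    """Parameters in a standard k x k convolution (with bias)."""
    return c_in * c_out * k * k + c_out

def separable_params(c_in, c_out, k):
    """Depth-wise separable convolution: depth-wise k x k, then point-wise 1 x 1."""
    depthwise = c_in * k * k + c_in   # one k x k kernel per input channel
    pointwise = c_in * c_out + c_out  # 1 x 1 convolution mixes channels
    return depthwise + pointwise

# For a 3 x 3 layer with 256 input and 256 output channels:
print(conv_params(256, 256, 3))       # 590080
print(separable_params(256, 256, 3))  # 68352
```

Here the separable form uses roughly 8.6x fewer parameters for the same input and output shape, which is why Xception reaches a depth of 452 layers with only 8.6 M parameters.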