| S.No. | Architecture | Benefits | Weaknesses |
| --- | --- | --- | --- |
| 1 | LeNet | Automatic feature learning reduces the parameter count and computation cost. | Large filter sizes; scales poorly across diverse image classes. |
| 2 | AlexNet | Uses both large and small filter sizes to extract high- and low-level features. | Many neurons in the first and second layers remain inactive. |
| 3 | ZFNet | Introduces parameter tuning guided by visualization of intermediate-layer outputs. | The visualization may require additional processing. |
| 4 | VGG | Introduces the concept of the effective receptive field; proposes a simple, homogeneous topology. | The fully connected layers demand high computational power. |
| 5 | GoogLeNet | Introduces multiscale filters; a bottleneck (1×1 convolution) layer reduces dimensionality (sketch below). | Some information may be lost in the bottleneck layer. |
| 6 | Inception-V3 | Introduces asymmetric (factorized) filters and bottlenecks to reduce computation cost (sketch below). | Complex architecture design. |
| 7 | Highway networks | Introduces a new training mechanism based on gated cross-layer connections (sketch below). | Complex design. |
| 8 | Inception-V4 | Multilevel feature extraction. | The complex architecture design may add computation cost. |
| 9 | ResNet | Introduces residual learning, which lowers the error rate of deeper networks (sketch below). | Stacked modules can over-adapt hyperparameters to a specific task. |
| 10 | DelugeNet | – | – |
| 11 | Xception | Introduces depthwise separable convolution; applies cardinality to learn good abstractions (sketch below). | High computational cost. |
| 12 | ResNeXt | Applies cardinality (grouped convolution) in each layer for diverse transformations (sketch below). | High computational power requirement. |
| 13 | DenseNet | Ensures maximum information flow between layers; avoids relearning redundant features (sketch below). | The growing number of feature maps inflates the parameter count. |
| 14 | Convolutional block attention module (CBAM) | Applies global average pooling and global max pooling simultaneously, improving information flow (sketch below). | The computational load may increase. |
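
The sketches below illustrate several of the mechanisms named in the table. All use PyTorch purely as an assumed notation (the table names no framework), and each is a minimal illustration rather than any architecture's exact implementation. First, GoogLeNet's bottleneck: a 1×1 convolution compresses the channel dimension before a costly 3×3 convolution, cutting computation at the risk of discarding some channel information.

```python
import torch
import torch.nn as nn

# A 1x1 "bottleneck" convolution compresses the channels before an
# expensive 3x3 convolution, as in the GoogLeNet Inception module.
bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),            # 256 -> 64 channels: cheap reduction
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),  # 3x3 conv on the reduced tensor
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 256, 32, 32)
print(bottleneck(x).shape)  # torch.Size([1, 64, 32, 32])
```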
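Inception-V3's asymmetric filters factorize an n×n convolution into a 1×n convolution followed by an n×1 convolution, covering the same receptive field with fewer weights. A minimal sketch, assuming a 3×3 factorization:

```python
import torch
import torch.nn as nn

# A 3x3 convolution factorized into 1x3 followed by 3x1: the stacked pair
# sees the same 3x3 neighbourhood but uses 6 weights per channel pair
# instead of 9.
asymmetric = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1)),
    nn.Conv2d(64, 64, kernel_size=(3, 1), padding=(1, 0)),
)

x = torch.randn(1, 64, 32, 32)
print(asymmetric(x).shape)  # torch.Size([1, 64, 32, 32]) -- spatial size preserved
```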
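The training mechanism of Highway networks rests on learned gates that decide, per unit, how much of the transformed signal versus the raw input to pass on, which is what makes very deep stacks trainable. A minimal sketch (the class name `HighwayLayer` is illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighwayLayer(nn.Module):
    """A learned gate T(x) blends the transform H(x) with the raw input:
    y = T(x) * H(x) + (1 - T(x)) * x."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # H(x)
        self.gate = nn.Linear(dim, dim)       # T(x)

    def forward(self, x):
        h = F.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))  # gate values in (0, 1)
        return t * h + (1 - t) * x       # gated mix of transform and carry

x = torch.randn(4, 128)
print(HighwayLayer(128)(x).shape)  # torch.Size([4, 128])
```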
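ResNet's residual learning adds the block's input back onto its output, so the stacked layers only need to learn a residual correction on top of an identity mapping. A minimal sketch, omitting the batch normalization used in the published architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """y = F(x) + x: the layers learn a residual F(x) over an identity
    shortcut, easing optimization of very deep stacks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)  # identity shortcut added before activation

x = torch.randn(1, 64, 16, 16)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 16, 16])
```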
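Xception's separable convolution splits a standard convolution into a per-channel (depthwise) spatial filter followed by a 1×1 (pointwise) filter that mixes channels:

```python
import torch
import torch.nn as nn

# Depthwise separable convolution: groups=in_channels makes the first conv
# filter each channel independently; the 1x1 conv then recombines channels.
separable = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),  # depthwise
    nn.Conv2d(64, 128, kernel_size=1),                       # pointwise
)

x = torch.randn(1, 64, 32, 32)
print(separable(x).shape)  # torch.Size([1, 128, 32, 32])
```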
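ResNeXt's cardinality is the number of parallel transformation paths inside a block, implemented compactly as a grouped convolution:

```python
import torch
import torch.nn as nn

# groups=32 splits the 128 channels into 32 independent branches of 4
# channels each -- a cardinality of 32 in ResNeXt's terminology.
grouped = nn.Conv2d(128, 128, kernel_size=3, padding=1, groups=32)

x = torch.randn(1, 128, 16, 16)
print(grouped(x).shape)  # torch.Size([1, 128, 16, 16])
```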
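DenseNet's maximum information flow comes from feeding every layer the concatenation of all earlier feature maps, which is also why the feature-map count, and with it the parameter count, grows with depth. A minimal sketch (`DenseBlock` is an illustrative name; the published block also uses batch normalization and bottleneck layers):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all earlier feature maps,
    so features are reused rather than relearned; the channel count grows
    by `growth` per layer (DenseNet's growth rate)."""
    def __init__(self, in_channels, growth, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_channels + i * growth, growth, 3, padding=1)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = F.relu(layer(torch.cat(features, dim=1)))
            features.append(out)  # every output feeds all later layers
        return torch.cat(features, dim=1)

x = torch.randn(1, 16, 8, 8)
print(DenseBlock(16, growth=12, num_layers=4)(x).shape)  # [1, 64, 8, 8]
```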
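Finally, CBAM's simultaneous use of global average and global max pooling appears in its channel-attention branch: both pooled vectors pass through a shared MLP, are summed, and become per-channel weights. A minimal sketch of that branch alone (`ChannelAttention` and the reduction ratio are illustrative; CBAM also has a spatial-attention branch):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling and global max pooling run in parallel,
    share one MLP, and their summed outputs are squashed into
    per-channel attention weights."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        weights = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * weights                   # rescale each channel

x = torch.randn(2, 64, 14, 14)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 14, 14])
```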