Review Article
Deep CNN and Deep GAN in Computational Visual Perception-Driven Image Analysis
Table 7
Comparisons of images generated by D-GAN variants.
| Type of D-GAN | Epoch 10 | Training losses | Epoch 100 | Training losses | Characteristics | Shortcomings |
| --- | --- | --- | --- | --- | --- | --- |
| D-GAN | | | | | Generator-discriminator framework trained via a minimax game, in which samples are generated directly. | Model parameters can oscillate, destabilize, and fail to converge. |
| DCGAN | | | | | Uses strided convolutions for downsampling and transposed convolutions for upsampling. | Gradients can vanish or explode. |
| CGAN | | | | | Conditional generation of images. | CGAN is not strictly unsupervised; it requires some labeled data to work. |
| LSGAN | | | | | Generates higher-quality images than the standard GAN and is more stable during training. | Additional computational cost. |
| WGAN | | | | | Improves training stability and mitigates the mode collapse problem. | Enforcing the Lipschitz constraint on the critic is difficult. |
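The loss formulations that distinguish these variants can be sketched concisely. Below is a minimal NumPy illustration (not tied to any particular framework, and not the exact implementations compared in the table) of the discriminator/critic objectives for the standard GAN, LSGAN, and WGAN:

```python
import numpy as np

def gan_d_loss(d_real, d_fake):
    # Standard GAN discriminator loss from the minimax game:
    # maximize log D(x) + log(1 - D(G(z))), written here as a loss to minimize.
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def lsgan_d_loss(d_real, d_fake):
    # LSGAN replaces the log loss with least squares
    # (target 1 for real samples, 0 for generated samples),
    # which yields smoother gradients and more stable training.
    return 0.5 * ((d_real - 1.0) ** 2 + d_fake ** 2).mean()

def wgan_critic_loss(critic_real, critic_fake):
    # WGAN critic loss: maximize critic(real) - critic(fake).
    # Valid only if the critic is Lipschitz-constrained
    # (e.g. via weight clipping), which is the difficult part noted above.
    return -(critic_real.mean() - critic_fake.mean())
```

For example, a discriminator that scores real samples near 1 and fakes near 0 drives both the GAN and LSGAN losses toward zero, while the WGAN critic is an unbounded score rather than a probability.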