Lessons Learned From the Training of GANs on Artificial Datasets
By: Shichang Tang
| Format: | Article |
|---|---|
| Published: | IEEE, 2020-01-01 |
Description
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years. However, they are often trained on image datasets with either too few samples or too many classes belonging to different data distributions. Consequently, GANs are prone to underfitting or overfitting, making their analysis difficult and constrained. Therefore, in order to conduct a thorough study of GANs while avoiding unnecessary interference introduced by the datasets, we train them on artificial datasets where there are infinitely many samples and the real data distributions are simple, high-dimensional, and have structured manifolds. Moreover, the generators are designed such that optimal sets of parameters exist. Empirically, we find that under various distance measures, the generator fails to learn such parameters with the GAN training procedure. In addition, we confirm that using mixtures of GANs is more beneficial than increasing the network depth or width when the model complexity is high enough. The benefit is partially due to the division of the generation and discrimination tasks across multiple generators and discriminators. We find that a mixture of generators can discover different modes or different classes automatically in the unsupervised setting. As an example of the generalizability of our conclusions to realistic datasets, we train a mixture of GANs on the CIFAR-10 dataset, and our method significantly outperforms the state-of-the-art in terms of popular metrics, i.e., Inception Score (IS) and Fréchet Inception Distance (FID).
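To make the artificial-dataset idea concrete, here is a minimal illustrative sketch of a "dataset" with infinitely many samples lying on a simple, structured low-dimensional manifold embedded in a high-dimensional space. The specific construction (a unit circle mapped linearly into 64 dimensions) and all names are our own toy choices for illustration, not the paper's exact datasets; the point is that each call draws fresh samples from a known real distribution whose ground-truth parameters (here, the embedding matrix `W`) exist and could in principle be matched by a generator.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, AMBIENT_DIM = 2, 64  # toy dimensions, chosen for illustration

# Fixed random linear embedding: plays the role of the "optimal" generator
# parameters that are known to exist for the artificial distribution.
W = rng.standard_normal((LATENT_DIM, AMBIENT_DIM))

def sample_real(batch_size):
    """Draw a fresh batch each call -- no finite dataset to over/underfit."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=batch_size)
    # Points on the unit circle: a 1-D structured manifold in latent space.
    latent = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # Embed into the high-dimensional ambient space.
    return latent @ W

batch = sample_real(128)
print(batch.shape)  # (128, 64)
```

Because the real distribution is fully known, discrepancies between the generator's learned parameters and the ground truth can be measured directly, which is what makes this setup convenient for analysis.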