InjectionGAN: Unified Generative Adversarial Networks for Arbitrary Image Attribute Editing
By: Chen Ding, Wei Kang, Jiaqi Zhu, Shuangyan Du
| Format: | Article |
|---|---|
| Published: | IEEE 2020-01-01 |
Description
Existing image-to-image translation methods usually combine encoder-decoder and generative adversarial networks to generate images. The encoder compresses an entire image into a static representation through a sequence of convolution layers ending in a bottleneck, and the intermediate features are then decoded into the target image. However, the bottleneck layer in those approaches limits the sharpness of details, the distinctness of the translation, and identity preservation, since different domain translations may relate to global or local regions of the input image. To address these issues, we propose InjectionGAN, a new model based on a generative adversarial network (GAN) for arbitrary attribute transfer. Specifically, conditioned on the target domain label, an auto-encoder-like network with multiple linear transformations and refinement connections is trained to translate the input image into the target domain. The connection blocks better shuttle low-level information from the encoder to the decoder, which helps preserve structural information while the appearance is modified slightly at the pixel level through adversarial training. Results on two popular datasets suggest that InjectionGAN achieves better performance.
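The abstract's core idea — a bottlenecked encoder-decoder where the target-domain label conditions the bottleneck and extra connections carry low-level encoder features to the decoder — can be illustrated with a toy NumPy sketch. This is not the paper's architecture: the pooling/upsampling stand-ins for convolutions, the scalar injection of the label, and the additive skip connection are all simplifying assumptions made for illustration.

```python
import numpy as np

np.random.seed(0)

def avg_pool2(x):
    """Halve spatial resolution by 2x2 average pooling (stand-in for a strided conv)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Double spatial resolution by nearest-neighbour repetition (stand-in for a deconv)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# A toy single-channel "image" and a one-hot target-domain label (3 domains assumed).
image = np.random.rand(8, 8)
label = np.array([0.0, 1.0, 0.0])

# Encoder: compress to a bottleneck, keeping the intermediate feature map
# so a connection can shuttle low-level information to the decoder.
feat = avg_pool2(image)        # 4x4 intermediate features
bottleneck = avg_pool2(feat)   # 2x2 bottleneck

# Condition on the target domain: broadcast a scalar code for the label over
# the bottleneck (a crude stand-in for channel-wise label concatenation).
conditioned = bottleneck + float(label.argmax())

# Decoder with a skip-style connection: low-level encoder features are added
# back after upsampling, so structure is preserved while the conditioned
# bottleneck carries the domain-specific change.
out = upsample2(upsample2(conditioned) + feat)

assert out.shape == image.shape
```

In the real model the encoder/decoder would be learned convolutional layers trained adversarially; the sketch only shows where the label injection and the encoder-to-decoder connections sit in the data flow.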