Invertible Autoencoder for Domain Adaptation
By: Yunfei Teng, Anna Choromanska
Format: Article
Published: MDPI AG, 2019-03-01
Description
Unsupervised image-to-image translation aims at finding a mapping between the source (A) and target (B) image domains, where in many applications aligned image pairs are not available at training time. This is an ill-posed learning problem, since it requires inferring the joint probability distribution from its marginals. State-of-the-art methods such as CycleGAN jointly learn coupled mappings F_AB : A → B and F_BA : B → A by introducing a cycle-consistency requirement into the learning problem, i.e., F_AB(F_BA(B)) ≈ B and F_BA(F_AB(A)) ≈ A. Cycle consistency enforces the preservation of mutual information between input and translated images; however, it does not explicitly force F_BA to be the inverse operation of F_AB. We propose a new deep architecture that we call the invertible autoencoder (InvAuto) to explicitly enforce this relation. This is done by forcing the encoder to be an inverted version of the decoder, where corresponding layers perform opposite mappings and share parameters. The mappings are constrained to be orthonormal. The resulting architecture reduces the number of trainable parameters by up to a factor of 2. We present image translation results on benchmark datasets and demonstrate state-of-the-art performance of our approach. Finally, we test the proposed domain adaptation method on the task of road video conversion. We demonstrate that videos converted with InvAuto are of high quality, and show that PilotNet, NVIDIA's neural-network-based end-to-end learning system for autonomous driving, trained on real road videos performs well when tested on the converted ones.
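The core weight-sharing idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy (a single linear layer pair; the paper's InvAuto architecture also handles convolutions and nonlinearities), showing only how a decoder that reuses the encoder's weight matrix transposed becomes an exact inverse once that matrix is orthonormal:

```python
# Toy sketch of the parameter-sharing principle behind an invertible
# autoencoder (NOT the paper's full InvAuto architecture): the decoder
# applies the transpose of the encoder's weight matrix, and if W is
# orthonormal (W.T @ W = I), decoding exactly inverts encoding.
import numpy as np

rng = np.random.default_rng(0)

# Build an orthonormal weight matrix W via QR decomposition.
W, _ = np.linalg.qr(rng.standard_normal((8, 8)))

def encode(x):
    return W @ x          # encoder layer: y = W x

def decode(y):
    return W.T @ y        # decoder shares W, applied transposed: x = W.T y

x = rng.standard_normal(8)
x_rec = decode(encode(x))
print(np.allclose(x, x_rec))  # True: W.T @ W = I, so decode inverts encode
```

Note that the transposed-weight decoder halves the parameter count relative to learning two independent matrices, mirroring the up-to-2x parameter reduction reported in the abstract.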