Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros
Abstract
Task: image-to-image translation (learn the mapping between an input image and an output image using a training set of aligned image pairs)
Difficulty: paired training data is often not available
Solution: learn to translate an image from a source domain X to a target domain Y in the absence of paired examples
Method: learn a mapping G: X→Y such that the distribution of translated images G(x) is indistinguishable from the distribution of Y, using an adversarial loss. But this objective alone is far too weakly constrained, so a second constraint is added: an inverse mapping F: Y→X is learned jointly so that F acts as the inverse of G, i.e., F(G(x)) ≈ x and G(F(y)) ≈ y (cycle consistency). This is the basic idea of CycleGAN. (A later ICLR 2018 paper that uses GANs to generate adversarial examples in latent space is quite similar to this.)
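To make the cycle-consistency constraint concrete, here is a minimal PyTorch-style sketch (my own illustration, not the authors' released code). The names G, F, real_x, real_y, and lam are assumed placeholders for the two generators, a batch from each domain, and the loss weight; the paper implements this constraint as an L1 reconstruction penalty with λ = 10.

```python
# Minimal sketch of the cycle-consistency loss (illustrative, not the authors' code).
# G: X -> Y and F: Y -> X are assumed to be arbitrary image-to-image generator networks.
import torch
import torch.nn.functional as nnF


def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    rec_x = F(G(real_x))   # forward cycle:  x -> G(x) -> F(G(x)) should reconstruct x
    rec_y = G(F(real_y))   # backward cycle: y -> F(y) -> G(F(y)) should reconstruct y
    return lam * (nnF.l1_loss(rec_x, real_x) + nnF.l1_loss(rec_y, real_y))
```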
Introduction
The opening two paragraphs are beautifully written~
To avoid the difficulty of collecting labeled, paired datasets for image translation, we can match at the set level rather than at the element level, i.e., perform domain mapping. Domain mapping amounts to finding a mapping G: X→Y such that the distribution of translated images G(X) is indistinguishable from the distribution of the target domain Y (a sketch of this set-level objective follows below).
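As a hedged illustration of this set-level matching, the sketch below writes out the adversarial objective for G: X→Y with a discriminator D_Y on domain Y. The names (G, D_Y, real_x, real_y) are assumptions for illustration, and the cross-entropy GAN loss is used here for brevity; the paper itself uses a least-squares GAN loss.

```python
# Sketch of the set-level adversarial objective for G: X -> Y (illustrative names).
# D_Y only ever sees unpaired samples: real images from Y and translated images G(x),
# so it matches whole distributions rather than individual image pairs.
import torch


def adversarial_losses(G, D_Y, real_x, real_y):
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    fake_y = G(real_x)

    # Discriminator: classify real Y images as 1 and translated images G(x) as 0.
    real_logits = D_Y(real_y)
    fake_logits = D_Y(fake_y.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))

    # Generator: make the distribution of G(x) indistinguishable from that of Y.
    gen_logits = D_Y(fake_y)
    g_loss = bce(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss
```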