157. CycleGAN


Today I learned about CycleGAN, so I’d like to write it up here.

Before this research, image-to-image translation tasks (learning how to map an input image to an image in a different style) required paired datasets for training. Unfortunately, in most cases you don’t have such paired images. CycleGAN tackles that challenge.

Combining Losses

CycleGAN’s training objective combines two losses.

Adversarial loss: The generator tries to produce images that look like they come from the target domain, while the discriminator tries to distinguish those generated images from real ones.
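
To make this concrete, here is a minimal sketch of the adversarial loss for one direction (X → Y), assuming PyTorch and the least-squares GAN formulation the paper uses in practice. `G`, `D_Y`, and the batches `real_x` / `real_y` are placeholders for your own models and data, not part of the original post.

```python
import torch

mse = torch.nn.MSELoss()

def adversarial_losses(G, D_Y, real_x, real_y):
    fake_y = G(real_x)  # translated image; should be indistinguishable from domain Y

    # Generator wants D_Y to classify its output as real (target = 1).
    loss_G = mse(D_Y(fake_y), torch.ones_like(D_Y(fake_y)))

    # Discriminator wants real Y images scored as 1 and generated images as 0.
    # detach() keeps the discriminator update from flowing back into G.
    fake_score = D_Y(fake_y.detach())
    loss_D = 0.5 * (
        mse(D_Y(real_y), torch.ones_like(D_Y(real_y)))
        + mse(fake_score, torch.zeros_like(fake_score))
    )
    return loss_G, loss_D
```

The same loss is applied symmetrically to the other direction (F mapping Y → X with its own discriminator D_X).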

Cycle consistency loss: Adversarial loss alone cannot guarantee that the generator actually maps the input to the desired output; it only ensures the output looks like the target domain. So we add a second mapping F, alongside the generator G, that tries to map the generated output back to the original input, hence the name CycleGAN. The reconstructed input should be the same as the original input, so by comparing these two and minimizing the difference, we can make sure the generator is preserving the content of its input.
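
A matching sketch of the cycle consistency loss, again assuming PyTorch. Both cycles (x → G(x) → F(G(x)) and y → F(y) → G(F(y))) are compared to the originals with an L1 distance and weighted by λ (the paper uses λ = 10); the names below are placeholders.

```python
import torch

l1 = torch.nn.L1Loss()

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should land back on the original x.
    forward_cycle = l1(F(G(real_x)), real_x)
    # Backward cycle: y -> F(y) -> G(F(y)) should land back on the original y.
    backward_cycle = l1(G(F(real_y)), real_y)
    return lam * (forward_cycle + backward_cycle)
```

The full objective is then the sum of the two adversarial losses (one per direction) and this cycle term.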