AOT-GAN Experiments

May 2021 · 1 min read

I conducted various experiments with AOT-GAN, proposed in the paper "Aggregated Contextual Transformations for High-Resolution Image Inpainting", on the Places2 dataset, using the free-form masks from the PConv (Partial Convolutions) inpainting work.

In particular, I focused on evaluating how effective the different losses are when training the framework. I made several observations:

  • The adversarial loss doesn’t seem to contribute much to the model’s learning, as it stays almost constant throughout training.

  • Training for longer than 1e4 iterations doesn’t add much improvement to the results.

  • Training without style loss produces blurry results, so style loss is an important component for synthesizing image textures (a minimal sketch of a Gram-matrix style loss follows this list).

  • Training without adversarial loss also produces good quality results!
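Since the style loss turned out to be the decisive term for sharp textures, here is a minimal sketch of a Gram-matrix style loss in PyTorch, in the spirit of the one used in AOT-GAN. The VGG19 layer indices, the L1 comparison of Gram matrices, and the assumption that inputs are already ImageNet-normalized are my illustrative choices, not necessarily the exact configuration used in the paper or the repository.

```python
# Sketch of a Gram-matrix style loss on VGG19 features (illustrative, not the repo code).
import torch
import torch.nn as nn
import torchvision.models as models


class StyleLoss(nn.Module):
    """Compares Gram matrices of VGG19 features between prediction and target."""

    def __init__(self, layer_ids=(1, 6, 11, 20, 29)):  # relu1_1 ... relu5_1 (assumed choice)
        super().__init__()
        vgg = models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # VGG is a fixed feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        self.l1 = nn.L1Loss()

    @staticmethod
    def gram(feat):
        # Gram matrix of a feature map, normalized by its size.
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

    def features(self, x):
        # Inputs are assumed to already be normalized with ImageNet statistics.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, output, target):
        loss = 0.0
        for fo, ft in zip(self.features(output), self.features(target)):
            loss = loss + self.l1(self.gram(fo), self.gram(ft))
        return loss
```

In training, this term would be weighted and added to the reconstruction, perceptual, and (optionally) adversarial losses; as noted above, dropping the adversarial term barely changed the quality of the outputs in my runs.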

For more details, please see the code on GitHub.