I have been on hiatus for a couple of weeks, but I have been running some experiments in the meantime, so I have a couple of posts lined up. Initially, I wanted to explore the idea of using the DRAW model. I was thinking of augmenting it by feeding in the caption words at each time step during generation, so that the model would hopefully learn to draw each word as it is fed in. Perhaps unsurprisingly, it turns out that this idea has already been explored and given rise to the AlignDRAW model.

In the paper, the authors show that this can be a successful approach to generating images from captions, although it does lead to slightly blurry images. They resort to a post-processing step that sharpens the generated images using a GAN.

Since GANs currently seem to be the best approach for generative models of images, I hopped onto the GAN bandwagon.

For my first attempt with GANs, I tried implementing the DCGAN using these tricks and tips. I trained the model on the center 32×32 crops of the images to see if it might work out of the box.

After a few unsuccessful attempts and seeing many brown squares, I looked for some alternatives. And so it BEGAN…

*Bounded Equilibrium Generative Adversarial Network (BEGAN)*

BEGAN is a recently proposed GAN variant which promises easier training, without having to carefully balance the progress of the discriminator and generator networks.

It differs from a regular GAN in several respects:

First, instead of using a discriminator which tries to output 0 for fake images and 1 for real images, the discriminator is now an autoencoder. Its goal is to reconstruct real images as well as possible while reconstructing fake images as poorly as possible. As in regular GANs, the generator's goal is to fool the discriminator, in this case into having a low autoencoder loss on generated images.
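To make the discriminator's new role concrete, here is a minimal sketch of the reconstruction error it is scored on. I am using PyTorch and a pixel-wise L1 error, which is what the BEGAN paper uses; the function name is my own:

```python
import torch

def recon_loss(v, v_recon):
    # Pixel-wise L1 reconstruction error, averaged over the batch:
    # low for images the autoencoder reconstructs well, high otherwise.
    return torch.mean(torch.abs(v - v_recon))
```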

Second, a balancing variable is introduced to automatically keep the generator's and discriminator's strengths in check. This is achieved by modifying the discriminator's loss dynamically during training to put more emphasis on real images or on fake images as required. Concretely, writing $\mathcal{L}(v) = |v - D(v)|$ for the pixel-wise autoencoder loss, the objectives for the discriminator and generator respectively are:

$$\mathcal{L}_D = \mathcal{L}(x) - k_t \, \mathcal{L}(G(z_D)) \qquad \mathcal{L}_G = \mathcal{L}(G(z_G))$$

The model tries to maintain an equilibrium between the autoencoder losses on real and fake images:

$$\frac{\mathbb{E}\left[\mathcal{L}(G(z))\right]}{\mathbb{E}\left[\mathcal{L}(x)\right]} = \gamma$$

This is done by updating the balancing variable $k_t$ at every training step as follows:

$$k_{t+1} = k_t + \lambda_k \left( \gamma \, \mathcal{L}(x) - \mathcal{L}(G(z_G)) \right)$$
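Putting the pieces together, here is a minimal PyTorch sketch of one BEGAN training step, assuming generator and discriminator modules and their optimizers already exist. The `gamma` and `lambda_k` defaults are illustrative values, not taken from this post:

```python
import torch

def ae_loss(D, v):
    # L(v) = |v - D(v)|: pixel-wise L1 autoencoder reconstruction error
    return torch.mean(torch.abs(v - D(v)))

def began_step(G, D, opt_G, opt_D, real, z, k, gamma=0.5, lambda_k=0.001):
    # Discriminator: reconstruct real images well, generated images poorly
    loss_real = ae_loss(D, real)
    loss_fake = ae_loss(D, G(z).detach())
    loss_D = loss_real - k * loss_fake
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator: produce images that the autoencoder reconstructs well
    loss_G = ae_loss(D, G(z))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

    # Move the balancing variable towards the gamma equilibrium,
    # clamping it to [0, 1] as in the paper
    k = k + lambda_k * (gamma * loss_real.item() - loss_G.item())
    return min(max(k, 0.0), 1.0)
```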

The authors claim that these changes allow BEGAN to be trained smoothly without many of the usual tricks, such as batch normalization or choosing different activation functions for the discriminator and generator.

After trying it myself, I have to agree; it seems to just work. The architecture I used is essentially the same as the one presented in the paper, although it is a bit smaller due to memory constraints. I used a learning rate of 0.0001, the Adam optimizer, ELU activations (except for the fully-connected layers, which have linear activations) and batch normalization on all layers other than the output.
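For reference, that optimizer setup is a one-liner per network in PyTorch; `G` and `D` here stand for the generator and discriminator modules sketched below:

```python
import torch

# Assumes G and D are the modules sketched below; the learning rate is from the post.
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
```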

Generator - Input (64 dim noise) - FC (4096 units) - Reshape into 256x4x4 - 3x3 Conv (64 filters) - 3x3 Conv (64) - Upsample (Nearest neighbor) - 3x3 Conv (64) - 3x3 Conv (64) - Upsample - 3x3 Conv (64) - 3x3 Conv (64) - Upsample - 3x3 Conv (64) - 3x3 Conv (64) - 3x3 Locally-connected (Output 3x32x32)
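Here is how that generator might look as a PyTorch module. This is a sketch under my own assumptions: PyTorch rather than whatever framework I actually used at the time, and a regular convolution standing in for the final locally-connected layer, which PyTorch does not provide out of the box:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        def block(cin, cout):
            # 3x3 conv + batch norm + ELU, per the training setup above
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ELU())
        self.fc = nn.Linear(z_dim, 4096)  # linear activation, reshaped to 256x4x4
        self.body = nn.Sequential(
            block(256, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 4x4 -> 8x8
            block(64, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 8x8 -> 16x16
            block(64, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 16x16 -> 32x32
            block(64, 64), block(64, 64),
            # stand-in for the 3x3 locally-connected output layer
            nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, z):
        return self.body(self.fc(z).view(-1, 256, 4, 4))
```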

Discriminator - Input (3x32x32) - 3x3 Conv (64 filters) - 3x3 Conv (64) - 3x3 Conv (96) (Stride 2) - 3x3 Conv (96) - 3x3 Conv (96) - 3x3 Conv (128) (Stride 2) - 3x3 Conv (128) - 3x3 Conv (128) - 3x3 Conv (192) (Stride 2) - 3x3 Conv (192) - 3x3 Conv (192) - 3x3 Conv (256) (Stride 2) - 3x3 Conv (256) - 3x3 Conv (256) - FC (4096 units) - Reshape into 256x4x4 - 3x3 Conv (64) - 3x3 Conv (64) - Upsample (Nearest neighbor) - 3x3 Conv (64) - 3x3 Conv (64) - Upsample - 3x3 Conv (64) - 3x3 Conv (64) - Upsample - 3x3 Conv (64) - 3x3 Conv (64) - 3x3 Conv (3) (Output 3x32x32)
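And a matching sketch of the discriminator, under the same assumptions: a convolutional encoder down to 2×2, a linear bottleneck, and a decoder mirroring the generator body back up to 3×32×32:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Autoencoder discriminator: conv encoder, bottleneck, conv decoder."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, stride=1):
            # 3x3 conv + batch norm + ELU, downsampling when stride=2
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                nn.BatchNorm2d(cout),
                nn.ELU())
        self.encoder = nn.Sequential(
            block(3, 64), block(64, 64),
            block(64, 96, stride=2), block(96, 96), block(96, 96),        # 32 -> 16
            block(96, 128, stride=2), block(128, 128), block(128, 128),   # 16 -> 8
            block(128, 192, stride=2), block(192, 192), block(192, 192),  # 8 -> 4
            block(192, 256, stride=2), block(256, 256), block(256, 256))  # 4 -> 2
        self.fc = nn.Linear(256 * 2 * 2, 4096)  # linear, reshaped to 256x4x4
        self.decoder = nn.Sequential(
            block(256, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 4x4 -> 8x8
            block(64, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 8x8 -> 16x16
            block(64, 64), block(64, 64),
            nn.Upsample(scale_factor=2, mode='nearest'),  # 16x16 -> 32x32
            block(64, 64), block(64, 64),
            nn.Conv2d(64, 3, 3, padding=1))  # output 3x32x32

    def forward(self, x):
        h = self.encoder(x).flatten(1)
        return self.decoder(self.fc(h).view(-1, 256, 4, 4))
```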

After a short training run of 30,000 iterations, here are some results:

Although these samples are not realistic by any stretch of the imagination, we can see that the model is making progress. Interestingly, the model seems to be falling into certain modes, with groups of images looking similar to each other. There are also a number of artifacts in the images (black splotches). Training the model for longer would likely bring better results: in the original paper, the authors trained for around 200,000 iterations, much more than this quick run.

In the next post, we will move on to the main task: inpainting using BEGAN.