Creating a DCGAN with TensorFlow for Generating Human Faces | Generative Modeling

Generative adversarial networks (GANs) are a powerful class of neural network architectures used to generate new data samples. In this tutorial, we will build a deep convolutional GAN (DCGAN) for generating realistic human faces with TensorFlow. A DCGAN is a GAN that uses convolutional neural networks for both the generator and the discriminator.

Here are the steps we will cover in this tutorial:

  1. Understanding the DCGAN architecture
  2. Preparing the dataset
  3. Building the generator and discriminator models
  4. Training the DCGAN
  5. Generating new human faces

Step 1: Understanding the DCGAN architecture
DCGANs consist of two neural networks – a generator and a discriminator. The generator takes random noise as input and generates new images, while the discriminator tries to distinguish between real images from the dataset and fake images generated by the generator. The two networks are trained in an adversarial manner, where the generator tries to fool the discriminator and the discriminator tries to become better at distinguishing real from fake images.

The generator network consists of a dense projection followed by several transposed convolutional (upsampling) layers with batch normalization and ReLU activations. The discriminator network consists of strided convolutional layers with LeakyReLU activations, followed by a final dense layer with a sigmoid activation that outputs the probability that an image is real.
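
For intuition, the two networks optimize the standard GAN minimax objective, where D(x) is the discriminator's probability that image x is real and G(z) is the image the generator produces from noise z:

min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 - D(G(z))) ]

In practice, Step 4 implements this objective with the equivalent binary cross-entropy losses for the two networks.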

Step 2: Preparing the dataset
For this tutorial, we will use the CelebA dataset, which contains over 200,000 celebrity faces. You can download the dataset from the following link:
https://www.kaggle.com/jessicali9530/celeba-dataset

Once you have downloaded the dataset, you can extract the images and store them in a folder called "celeba" in your working directory.
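
Below is a minimal input-pipeline sketch, assuming the extracted images sit under the "celeba" folder (the exact subfolder layout depends on how the archive unpacks) and that resizing the crops to 64x64 is acceptable; it also defines the BATCH_SIZE we will reuse in Step 4:

import tensorflow as tf

BATCH_SIZE = 128  # assumed batch size, reused by the training step in Step 4

# Load the images without labels, resizing them to 64x64 to match the models below.
dataset = tf.keras.utils.image_dataset_from_directory(
    "celeba",
    labels=None,
    image_size=(64, 64),
    batch_size=BATCH_SIZE,
)

# Scale pixels from [0, 255] to [-1, 1] so they match the generator's tanh output range.
dataset = dataset.map(lambda x: (x - 127.5) / 127.5)
dataset = dataset.prefetch(tf.data.AUTOTUNE)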

Step 3: Building the generator and discriminator models
We will now build the generator and discriminator models using TensorFlow. Here is the code for the generator, which upsamples a 100-dimensional noise vector into a 64x64x3 image:

import tensorflow as tf

def build_generator():
    model = tf.keras.models.Sequential()

    # Project the 100-dimensional noise vector onto an 8x8x256 feature map.
    model.add(tf.keras.layers.Dense(8*8*256, input_shape=(100,)))
    model.add(tf.keras.layers.Reshape((8, 8, 256)))

    # Upsample 8x8 -> 16x16.
    model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.ReLU())

    # Upsample 16x16 -> 32x32.
    model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same'))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.ReLU())

    # Upsample 32x32 -> 64x64 and map to 3 RGB channels in [-1, 1] via tanh.
    model.add(tf.keras.layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', activation='tanh'))

    return model

And here is the code for the discriminator, which takes a 64x64x3 image and outputs the probability that it is real:

def build_discriminator():
    model = tf.keras.models.Sequential()

    # Downsample 64x64 -> 32x32.
    model.add(tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=(64, 64, 3)))
    model.add(tf.keras.layers.LeakyReLU())

    # Downsample 32x32 -> 16x16.
    model.add(tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())

    # Flatten and output the probability that the input image is real.
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(1, activation='sigmoid'))

    return model
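
Before moving on, it can help to run a quick, optional sanity check that the generator's output shape matches the input the discriminator expects:

# Optional shape check with throwaway model instances (Step 4 builds the ones we train).
g, d = build_generator(), build_discriminator()
sample = g(tf.random.normal([1, 100]), training=False)
print(sample.shape)     # (1, 64, 64, 3)
print(d(sample).shape)  # (1, 1)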

Step 4: Training the DCGAN
We will now train the DCGAN on the CelebA dataset. Here is the code that sets up the models, the optimizers, the losses, and a single training step:

# Set up the GAN
generator = build_generator()
discriminator = build_discriminator()

# Adam with a small learning rate and beta_1 = 0.5, as recommended in the DCGAN paper.
generator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
discriminator_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0002, beta_1=0.5)

cross_entropy = tf.keras.losses.BinaryCrossentropy()

def generator_loss(fake_output):
    # The generator wants the discriminator to classify its images as real (1).
    return cross_entropy(tf.ones_like(fake_output), fake_output)

def discriminator_loss(real_output, fake_output):
    # The discriminator should classify real images as 1 and generated images as 0.
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    return real_loss + fake_loss

@tf.function
def train_step(images):
    # BATCH_SIZE is defined alongside the data pipeline in Step 2.
    noise = tf.random.normal([BATCH_SIZE, 100])

    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated_images = generator(noise, training=True)

        real_output = discriminator(images, training=True)
        fake_output = discriminator(generated_images, training=True)

        gen_loss = generator_loss(fake_output)
        disc_loss = discriminator_loss(real_output, fake_output)

    # Compute gradients for each network separately and apply them with its own optimizer.
    gradients_of_generator = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.trainable_variables))

    return gen_loss, disc_loss
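
The train_step function above handles a single batch; we still need an outer loop that feeds it batches from the dataset built in Step 2. Here is a minimal sketch, with the number of epochs as an assumption you should tune for your hardware:

EPOCHS = 50  # assumed; expect to experiment with this value

for epoch in range(EPOCHS):
    for image_batch in dataset:  # the tf.data pipeline from Step 2
        gen_loss, disc_loss = train_step(image_batch)

    # Report the losses from the last batch of the epoch.
    print(f"Epoch {epoch + 1}/{EPOCHS}: gen_loss={float(gen_loss):.3f}, disc_loss={float(disc_loss):.3f}")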

Step 5: Generating new human faces
Once the DCGAN has been trained, we can generate new human faces by sampling random noise and passing it through the generator network. Because the generator uses a tanh output, the images come out in the range [-1, 1] and need to be rescaled to [0, 1] before they are displayed. Here is the code for generating new human faces:

import matplotlib.pyplot as plt

def generate_faces(n):
    noise = tf.random.normal([n, 100])
    generated_faces = generator(noise, training=False)

    # Rescale from the generator's tanh range [-1, 1] to [0, 1] for display.
    generated_faces = ((generated_faces + 1) / 2).numpy()

    plt.figure(figsize=(10, 10))

    for i in range(n):
        plt.subplot(4, 4, i + 1)  # 4x4 grid, so n should be at most 16
        plt.imshow(generated_faces[i])
        plt.axis('off')

    plt.show()

You can use the generate_faces function to generate new human faces by passing the number of faces you want as an argument (up to 16, since the plot uses a 4x4 grid).
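
For example, the call below fills the 4x4 grid:

generate_faces(16)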

That’s it! You have successfully built a DCGAN for generating human faces using TensorFlow. Experiment with different hyperparameters, architectures, and datasets to create even more realistic and diverse faces. Happy coding!
