Machine Learning | Implementing Variational Inference with Normalizing Flows using PyTorch

Machine learning is a branch of artificial intelligence that develops algorithms that let computers learn from data. One powerful technique from probabilistic machine learning is variational inference with normalizing flows. Variational inference approximates complex probability distributions with simpler, tractable ones, while normalizing flows are a class of models that apply a sequence of invertible transformations to map a simple base distribution to a more complex one.

In this tutorial, we will create a simple variational inference model using PyTorch to approximate a target distribution with a normalizing flow. We will start by defining the target distribution, then implement the normalizing flow and the variational inference algorithm. Finally, we will train the model and evaluate its performance.

First, let’s set up the environment and import the necessary libraries:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt

Next, we will define the target distribution that we want to approximate. In this example, we will use a standard Gaussian (mean 0, identity covariance). The function below evaluates its joint density, reducing over the last axis so it works both for the 2-D training samples and for the plotting grid later on:

def target_distribution(x):
    # Joint density of a standard Gaussian in d = x.shape[-1] dimensions:
    # p(x) = (2 * pi)^(-d / 2) * exp(-0.5 * ||x||^2)
    d = x.shape[-1]
    return torch.exp(-0.5 * torch.sum(x**2, dim=-1)) / (2 * np.pi) ** (d / 2)
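
As a quick sanity check (an addition, not part of the original walkthrough), the density should peak at the origin with value 1 / (2 * pi) ≈ 0.159 in two dimensions:

print(target_distribution(torch.zeros(1, 2)))  # tensor([0.1592]), i.e. 1 / (2 * pi)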

Now, we will implement the normalizing flow, which is a sequence of invertible transformations applied to a simple base distribution. We will use a planar flow (Rezende & Mohamed, 2015), which transforms a sample as f(z) = z + u * tanh(w·z + b) and is a popular choice for its simplicity and cheap log-determinant. Note that the transformation is only invertible if w·u ≥ -1, which we enforce by reparameterizing u in the forward pass:

class PlanarFlow(nn.Module):
    def __init__(self, dim):
        super(PlanarFlow, self).__init__()
        self.u = nn.Parameter(torch.randn(dim))
        self.w = nn.Parameter(torch.randn(dim))
        self.b = nn.Parameter(torch.randn(1))

    def forward(self, z):
        # Reparameterize u so that w . u_hat >= -1, which guarantees invertibility
        wu = torch.sum(self.w * self.u)
        u_hat = self.u + (F.softplus(wu) - 1 - wu) * self.w / torch.sum(self.w**2)

        # f(z) = z + u_hat * tanh(w . z + b)
        inner = torch.sum(self.w * z, dim=1, keepdim=True) + self.b
        f_z = z + u_hat * torch.tanh(inner)

        # log|det df/dz| = log|1 + psi(z) . u_hat| with psi(z) = tanh'(inner) * w
        psi = (1 - torch.tanh(inner)**2) * self.w
        log_det = torch.log(torch.abs(1 + torch.sum(psi * u_hat, dim=1, keepdim=True)) + 1e-8)

        return f_z, log_det
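
A minimal smoke test (our addition) confirms the expected shapes: the transformed batch keeps its shape, and there is one log-determinant per sample:

flow = PlanarFlow(dim=2)
z = torch.randn(5, 2)
f_z, log_det = flow(z)
print(f_z.shape, log_det.shape)  # torch.Size([5, 2]) torch.Size([5, 1])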

Next, we will define the variational model, which simply chains several planar flows. Its forward pass returns the transformed samples together with the accumulated log-determinants, which we will need to evaluate the variational objective:

class VariationalInference(nn.Module):
    def __init__(self, dim, num_flows):
        super(VariationalInference, self).__init__()
        self.flows = nn.ModuleList([PlanarFlow(dim) for _ in range(num_flows)])

    def forward(self, z):
        # Accumulate the log-determinant of every flow; by the change-of-variables
        # formula, log q_K(z_K) = log q_0(z_0) - log_det_sum
        log_det_sum = 0
        for flow in self.flows:
            z, log_det = flow(z)
            log_det_sum += log_det

        return z, log_det_sum
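
To make the change-of-variables relationship concrete, here is a short sketch (our addition, using torch.distributions, which the post itself does not use) that evaluates the log-density of transformed samples:

import torch.distributions as dist

base = dist.MultivariateNormal(torch.zeros(2), torch.eye(2))
z0 = base.sample((5,))
vi_demo = VariationalInference(dim=2, num_flows=4)
zK, log_det_sum = vi_demo(z0)
log_qK = base.log_prob(z0) - log_det_sum.squeeze(-1)  # one log-density per sample
print(log_qK.shape)  # torch.Size([5])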

Now, we will set up the training loop, optimizing the flow parameters with the Adam optimizer. The objective is the KL divergence KL(q_K || p) = E[log q_0(z_0) - log_det_sum - log p(z_K)]; since E[log q_0(z_0)] does not depend on the flow parameters, we minimize the equivalent loss -E[log p(z_K) + log_det_sum]:

# Set random seed for reproducibility
torch.manual_seed(42)

# Define model parameters
dim = 2
num_flows = 4
vi = VariationalInference(dim, num_flows)
optimizer = optim.Adam(vi.parameters(), lr=0.01)

# Training loop
num_epochs = 1000
for epoch in range(num_epochs):
    z = torch.randn(1000, dim)              # sample from the base distribution
    z_new, log_det = vi(z)                  # push the samples through the flows
    log_p = torch.log(target_distribution(z_new) + 1e-9)

    # Negative of E[log p(z_K) + log_det_sum]; E[log q_0] is constant and dropped
    loss = -torch.mean(log_p + log_det.squeeze(-1))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 100 == 0:
        print(f'Epoch {epoch}/{num_epochs}, Loss: {loss.item():.4f}')
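
Because the base and target distributions here are both standard Gaussians, the optimal flow is the identity map: the KL term vanishes at convergence, and the loss should approach the entropy of the 2-D base distribution, log(2πe) ≈ 2.84. A quick check of this floor (our addition):

print('loss floor:', np.log(2 * np.pi * np.e))  # ≈ 2.8379; the final loss should be close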

Finally, we can generate samples from the learned distribution and plot the results to evaluate the performance of the model:

# Generate samples from the learned distribution
with torch.no_grad():
    z_sample = torch.randn(1000, dim)
    z_new, _ = vi(z_sample)
    samples = z_new.numpy()

# Plot samples from the flow against contours of the target density
plt.hist2d(samples[:, 0], samples[:, 1], bins=50, cmap='plasma')
x = np.linspace(-3, 3, 100)
y = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, y)
# target_distribution reduces over the last axis, so Z has shape (100, 100)
Z = target_distribution(torch.Tensor(np.dstack((X, Y)))).numpy()
plt.contour(X, Y, Z, cmap='viridis', levels=5)
plt.colorbar()
plt.show()
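
Beyond the visual comparison, a quick quantitative check (our addition) compares the sample moments with the target's mean 0 and identity covariance:

print('sample mean:', samples.mean(axis=0))            # should be close to [0, 0]
print('sample cov:\n', np.cov(samples, rowvar=False))  # should be close to the 2x2 identity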

In this tutorial, we implemented a simple variational inference model with normalizing flows using PyTorch. By minimizing the KL divergence between the flow's output distribution and the target, the model learns a flexible approximation to the target density. Variational inference with normalizing flows is a powerful technique with many applications in machine learning, such as generative modeling and density estimation.
