Visualizing Neural Network Learning in Python with Numpy, PyTorch, and Deep Learning

Neural network learning visualization plays a crucial role in understanding and interpreting the behavior and performance of a neural network during the training process. By visualizing the learning process, we can gain insights into how the network is learning, which can help us improve its performance and troubleshoot any issues that may arise.

In this tutorial, we will cover how to visualize neural network learning in Python, using PyTorch to build and train a model, NumPy for light post-processing, and Matplotlib for plotting. We will walk through creating and training a simple neural network with PyTorch, and then visualize its learning using techniques such as loss curves, accuracy plots, and activation visualization.

  1. Setting up the environment:
    Before we start, make sure you have Python installed on your machine, along with the NumPy, PyTorch, torchvision, and Matplotlib libraries (the last two are used below for loading MNIST and for plotting). You can install them using pip:
pip install numpy
pip install torch
pip install torchvision
pip install matplotlib
  2. Creating a simple neural network:
    We will start by creating a simple neural network using PyTorch. Here is an example of a fully connected network with a single hidden layer: the input is passed through a linear layer, a ReLU activation, and a second linear layer that produces the class scores:
import torch
import torch.nn as nn

class SimpleNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out
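Before training, it can help to instantiate the model and push a dummy batch through it as a sanity check. This is a small optional sketch, not part of the original walkthrough; the sizes (784 inputs, 128 hidden units, 10 classes) match the MNIST setup used below:

# Optional sanity check: run a dummy batch of flattened 28x28 inputs
# through the network and confirm the output shape is (batch, num_classes).
check_model = SimpleNN(input_size=28*28, hidden_size=128, num_classes=10)
dummy_batch = torch.randn(4, 28*28)      # 4 fake flattened images
logits = check_model(dummy_batch)
print(logits.shape)                      # torch.Size([4, 10])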
  3. Training the neural network:
    Next, we will train the neural network using a simple dataset. For this tutorial, we will use the popular MNIST dataset, which consists of grayscale images of handwritten digits (0-9). Here is an example of training the neural network on the MNIST dataset:
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms

# Load the MNIST dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32, shuffle=True)

# Initialize the neural network and optimizer
model = SimpleNN(28*28, 128, 10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# Train the neural network
num_epochs = 5
for epoch in range(num_epochs):
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs = inputs.view(-1, 28*28)    # flatten each 28x28 image into a 784-dim vector

        optimizer.zero_grad()              # clear gradients from the previous step
        outputs = model(inputs)            # forward pass
        loss = criterion(outputs, labels)  # cross-entropy loss
        loss.backward()                    # backpropagate
        optimizer.step()                   # update the weights
  4. Visualizing the learning process:
    Now that the network has been trained, we can visualize its learning process. One common technique is to plot the loss curve, which shows how the loss decreases over the training epochs. The loop below repeats the training loop from above, but also records the average loss per epoch so it can be plotted; in practice you would simply fold this bookkeeping into your one training loop rather than training twice:
import matplotlib.pyplot as plt

losses = []
for epoch in range(num_epochs):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs = inputs.view(-1, 28*28)

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
    losses.append(running_loss / len(trainloader))

plt.plot(losses)
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.title('Training Loss Curve')
plt.show()
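NumPy is handy for post-processing the recorded values. As a small optional sketch (the window size of 3 is an arbitrary choice for illustration), a simple moving average smooths the loss curve; this is most useful when you record noisier per-batch losses rather than per-epoch averages:

import numpy as np

# Optional: smooth the recorded losses with a simple moving average.
window = 3  # arbitrary window size chosen for this sketch
smoothed = np.convolve(losses, np.ones(window) / window, mode='valid')

plt.plot(losses, label='raw')
plt.plot(range(window - 1, len(losses)), smoothed, label='smoothed')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Raw vs. Smoothed Training Loss')
plt.show()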

In addition to the loss curve, we can also check how well the trained network does on the training data. The snippet below computes a single overall accuracy; the sketch that follows it shows how to turn this into an accuracy curve over epochs.

correct = 0
total = 0
with torch.no_grad():                     # no gradients needed for evaluation
    for images, labels in trainloader:
        images = images.view(-1, 28*28)   # flatten, as during training
        outputs = model(images)
        _, predicted = torch.max(outputs, 1)  # index of the highest score is the predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

accuracy = correct / total
print('Accuracy on training data: %.2f %%' % (100 * accuracy))
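To get an actual accuracy curve rather than a single number, you can evaluate the model at the end of every epoch and store the results. Here is a minimal sketch of that idea; evaluate_accuracy is just a helper name chosen for this sketch (it wraps the evaluation logic above), and the sketch re-runs the training loop from before, whereas in a real script you would do this during your one and only training run:

def evaluate_accuracy(model, loader):
    # Same logic as the accuracy snippet above, wrapped as a helper.
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images = images.view(-1, 28*28)
            outputs = model(images)
            _, predicted = torch.max(outputs, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    return correct / total

accuracies = []
for epoch in range(num_epochs):
    for inputs, labels in trainloader:
        inputs = inputs.view(-1, 28*28)
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
    accuracies.append(evaluate_accuracy(model, trainloader))  # accuracy after each epoch

plt.plot(accuracies)
plt.xlabel('Epoch')
plt.ylabel('Training accuracy')
plt.title('Training Accuracy Curve')
plt.show()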

Finally, we can visualize the activations of the hidden layer to gain insight into how the network responds to an input. The example below registers a forward hook on the first linear layer (so it captures the output of fc1, before the ReLU), runs a training image through the network, and displays the captured activations as a heatmap:

def get_activation(model, x, layer):
    # Capture the output of `layer` during a forward pass using a forward hook
    activations = []
    def hook(module, inputs, output):
        activations.append(output)
    hook_handle = layer.register_forward_hook(hook)
    model(x)
    hook_handle.remove()
    return activations[0]

# Use a real image from the training set rather than random noise
image, _ = trainset[0]
x = image.view(1, 28*28)
activations = get_activation(model, x, model.fc1)

# The hidden layer has 128 units; reshape them into an 8x16 grid for display
plt.imshow(activations.detach().numpy().reshape(8, 16), cmap='hot', interpolation='nearest')
plt.title('Activations of Hidden Layer')
plt.show()
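A closely related view, offered here as an optional sketch rather than part of the original walkthrough: each row of the first layer's weight matrix has 784 entries, one per input pixel, so the rows can be reshaped into 28x28 images to see which input patterns each hidden unit has learned to respond to.

# Optional sketch: visualize first-layer weights as 28x28 images.
# Each row of model.fc1.weight corresponds to one hidden unit.
weights = model.fc1.weight.detach().numpy()        # shape (128, 784)

fig, axes = plt.subplots(4, 8, figsize=(12, 6))    # show the first 32 hidden units
for i, ax in enumerate(axes.flat):
    ax.imshow(weights[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
fig.suptitle('First-Layer Weight Patterns (first 32 hidden units)')
plt.show()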

By visualizing the learning process of a neural network, we can gain valuable insights into how the network is learning and performing. This can help us optimize the network architecture, hyperparameters, and training process to improve its performance on various tasks. I hope this tutorial has provided you with a comprehensive guide on how to visualize neural network learning using Python, Numpy, and PyTorch. Happy learning!
