Creating a Basic Neural Network with PyTorch: Exploring Deep Learning

Deep learning has revolutionized the field of artificial intelligence and machine learning, allowing us to build complex models that can learn from data and make intelligent decisions. PyTorch is a popular deep learning framework that provides a flexible and powerful platform for building and training neural networks. In this tutorial, we will walk through the process of building a simple neural network using PyTorch.

Before we begin, make sure you have PyTorch installed. You can install PyTorch using pip by running the following command:

pip install torch
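
If you want to confirm the installation worked (and check whether a GPU is available), a quick sanity check like the following should be enough:

import torch

print(torch.__version__)          # prints the installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU can be used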

Once you have PyTorch installed, let’s start by importing the necessary libraries:

import torch
import torch.nn as nn
import torch.optim as optim
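
Optionally, you can seed PyTorch's random number generator so that the randomly generated data and initial weights used below are reproducible across runs (the seed value 42 is arbitrary):

torch.manual_seed(42)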

Next, we will define our neural network model. For this tutorial, we will build a simple feedforward network with three input features, one hidden layer of five neurons, and a single output neuron. Here’s the code to define it:

class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(3, 5)   # input layer (3 features) -> hidden layer (5 neurons)
        self.fc2 = nn.Linear(5, 1)   # hidden layer (5 neurons) -> output (1 value)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # ReLU activation after the hidden layer
        x = self.fc2(x)              # no activation on the output
        return x

In the code above, we define a class SimpleNN that inherits from nn.Module, the base class for all PyTorch neural network models. Inside the __init__ method, we define two fully connected layers (nn.Linear) with the specified input and output sizes. In the forward method, we define the forward pass of the network: the input tensor x goes through the first layer, a ReLU activation, and then the second layer to produce the output.
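
As a quick sanity check (not part of the training flow below), you can instantiate a throwaway copy of the model, pass a dummy batch through it, and count its parameters. The batch size of 4 here is arbitrary:

check_model = SimpleNN()
dummy_input = torch.randn(4, 3)                          # batch of 4 samples, 3 features each
print(check_model(dummy_input).shape)                    # torch.Size([4, 1])
print(sum(p.numel() for p in check_model.parameters()))  # 26 parameters: (3*5 + 5) + (5*1 + 1)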

Now that we have defined our neural network model, let’s create an instance of the model and define our loss function and optimizer:

model = SimpleNN()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

In the code above, we create an instance of our SimpleNN model, define the mean squared error loss function (nn.MSELoss), and create a stochastic gradient descent optimizer (optim.SGD) with a learning rate of 0.01 that will update the model’s parameters during training.
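
To make the loss concrete: nn.MSELoss averages the squared differences between predictions and targets. A tiny hand-checkable example (the numbers are made up purely for illustration):

example_pred = torch.tensor([[1.0], [2.0]])
example_target = torch.tensor([[0.0], [4.0]])
print(criterion(example_pred, example_target))  # (1^2 + 2^2) / 2 = 2.5

If you would rather use a different optimizer for this toy problem, optim.Adam(model.parameters(), lr=0.01) is a drop-in alternative; it exposes the same zero_grad() and step() interface used below.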

Next, let’s generate some random input data and train our neural network:

# Generate random input data
X = torch.randn(10, 3)   # 10 samples, 3 features each
y = torch.randn(10, 1)   # 10 target values

# Train the neural network
for epoch in range(100):
    optimizer.zero_grad()        # clear gradients from the previous step
    output = model(X)            # forward pass
    loss = criterion(output, y)  # compute the mean squared error
    loss.backward()              # backpropagate the gradients
    optimizer.step()             # update the weights
    print(f'Epoch {epoch+1}/100, Loss: {loss.item()}')

In the code above, we generate random input data X and random target labels y. We then run a training loop for 100 epochs: in each epoch we clear the accumulated gradients, run a forward pass to get the model’s output, compute the loss between the predictions and the targets, backpropagate the gradients, and update the model parameters with the optimizer.
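
For anything beyond this toy dataset you would typically iterate over mini-batches rather than pass the full tensor at once. Here is a minimal sketch of the same loop using TensorDataset and DataLoader (the batch size of 5 is chosen arbitrarily, and it reuses the X, y, model, criterion, and optimizer defined above):

from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=5, shuffle=True)

for epoch in range(100):
    for batch_X, batch_y in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_X), batch_y)
        loss.backward()
        optimizer.step()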

After training the model, you can make predictions on new data by passing it through the trained model. It is good practice to wrap inference in torch.no_grad() so PyTorch does not track gradients:

# Make predictions
new_data = torch.randn(1, 3)   # a single new sample with 3 features
with torch.no_grad():          # disable gradient tracking for inference
    prediction = model(new_data)
print('Prediction:', prediction.item())
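
Once you are happy with the model, you will usually want to persist it. The conventional approach in PyTorch is to save the state dict; the file name model.pt below is just an example:

# Save the trained weights
torch.save(model.state_dict(), 'model.pt')

# Later, recreate the architecture and load the weights back in
restored = SimpleNN()
restored.load_state_dict(torch.load('model.pt'))
restored.eval()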

And that’s it! You have just built and trained a simple neural network using PyTorch. This tutorial covers the basics of building a neural network in PyTorch, but there are many more advanced techniques and models that you can explore. I recommend checking out the official PyTorch documentation and tutorials for more in-depth learning resources. Happy coding!
