PyTorch Basics – Part Nineteen: Logistic Regression Implementation
In this tutorial, we will implement logistic regression in PyTorch. Logistic regression is a linear model for binary classification; because the implementation below uses `nn.CrossEntropyLoss` over `num_classes` outputs, it also extends naturally to multi-class (softmax) classification.
Steps to Implement Logistic Regression:

1. Import the PyTorch libraries (`torch` and `torch.nn`).
2. Define the logistic regression model class.
3. Instantiate the model.
4. Define the loss function and optimizer.
5. Train the model.
6. Evaluate the model.
Define the model class (together with the imports from the first step):

```python
import torch
import torch.nn as nn

class LogisticRegression(nn.Module):
    def __init__(self, input_size, num_classes):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_size, num_classes)

    def forward(self, x):
        # Return raw logits; CrossEntropyLoss applies log-softmax internally
        return self.linear(x)
```
Instantiate the model (`input_size` and `num_classes` depend on your dataset):

```python
model = LogisticRegression(input_size, num_classes)
```
Define the loss function and optimizer:

```python
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
Train the model:

```python
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(inputs)
    loss = criterion(outputs, labels)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
Evaluate the model with gradient tracking disabled (note that `.data` is unnecessary inside `torch.no_grad()` and is discouraged in modern PyTorch):

```python
with torch.no_grad():
    outputs = model(test_inputs)
    _, predicted = torch.max(outputs, 1)  # class with the highest logit
    accuracy = (predicted == test_labels).sum().item() / test_labels.size(0)
    print('Accuracy: {}'.format(accuracy))
```
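Since the tutorial frames logistic regression as a binary classifier, it is worth noting an alternative formulation not shown in the steps above: a single output unit trained with `nn.BCEWithLogitsLoss`, which applies the sigmoid internally. The class name, shapes, and data below are illustrative, not part of the original tutorial:

```python
import torch
import torch.nn as nn

# Hypothetical binary variant: one output unit with a sigmoid,
# instead of two softmax outputs with CrossEntropyLoss.
class BinaryLogisticRegression(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        self.linear = nn.Linear(input_size, 1)

    def forward(self, x):
        return self.linear(x)  # raw logit

model = BinaryLogisticRegression(input_size=4)
criterion = nn.BCEWithLogitsLoss()  # expects float targets in {0.0, 1.0}

x = torch.randn(8, 4)                    # illustrative batch of 8 samples
y = torch.randint(0, 2, (8, 1)).float()  # float targets, shape (8, 1)
loss = criterion(model(x), y)

probs = torch.sigmoid(model(x))   # probabilities in (0, 1)
predicted = (probs > 0.5).long()  # threshold at 0.5
```

Either formulation is fine for two classes; `BCEWithLogitsLoss` halves the number of output weights, while `CrossEntropyLoss` generalizes directly to more classes.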
By following these steps, you can implement a working logistic regression classifier in PyTorch.