Implementing Loss Functions in PyTorch: Day 3


Today, we will delve into the concept of loss functions in machine learning and how they are implemented in PyTorch. Loss functions are used to quantify how well a model is performing by comparing its predictions to the actual target values. In PyTorch, loss functions are defined in the torch.nn module.

Mean Squared Error Loss

Mean Squared Error (MSE) is a common loss function used for regression problems. It calculates the average of the squared differences between the predicted values and the true values. Here’s an example of how to implement MSE loss in PyTorch:


import torch
import torch.nn as nn

# Dummy predictions (with gradients enabled) and ground-truth targets
predicted_values = torch.randn(3, requires_grad=True)
true_values = torch.randn(3)

# MSELoss averages the squared differences between predictions and targets
loss_fn = nn.MSELoss()
loss = loss_fn(predicted_values, true_values)

print(loss)
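
Under the hood, nn.MSELoss simply averages the squared element-wise differences, so the result can be checked by hand. The snippet below is a minimal sketch of that check; it also calls loss.backward() to show that gradients flow back to the predictions.

import torch
import torch.nn as nn

predicted_values = torch.randn(3, requires_grad=True)
true_values = torch.randn(3)

loss = nn.MSELoss()(predicted_values, true_values)

# Manual check: MSE is the mean of the squared element-wise differences
manual_mse = ((predicted_values - true_values) ** 2).mean()
print(loss, manual_mse)  # both print the same value

# The loss is a scalar, so backward() computes d(loss)/d(predicted_values)
loss.backward()
print(predicted_values.grad)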

Cross Entropy Loss

Cross Entropy Loss is commonly used for classification problems, especially when dealing with multiple classes. Conceptually, it measures the difference between the predicted probability distribution over the classes and the true distribution of the labels. Note that PyTorch’s nn.CrossEntropyLoss expects raw, unnormalized scores (logits) as input and class indices as targets; it applies log-softmax internally. Here’s an example of implementing Cross Entropy Loss in PyTorch:


import torch
import torch.nn as nn

# Raw, unnormalized scores (logits) for 3 samples and 5 classes
predicted_logits = torch.randn(3, 5, requires_grad=True)
# Ground-truth class indices in the range [0, 5)
true_labels = torch.randint(0, 5, (3,))

# CrossEntropyLoss applies log-softmax to the logits internally
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(predicted_logits, true_labels)

print(loss)
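
nn.CrossEntropyLoss combines a log-softmax with the negative log-likelihood loss. A quick way to see this is to compute the same quantity in two explicit steps; the sketch below assumes the same shapes as the example above.

import torch
import torch.nn as nn
import torch.nn.functional as F

predicted_logits = torch.randn(3, 5, requires_grad=True)  # raw, unnormalized scores
true_labels = torch.randint(0, 5, (3,))                   # class indices

# One step: CrossEntropyLoss applied directly to the logits
ce_loss = nn.CrossEntropyLoss()(predicted_logits, true_labels)

# Two steps: log-softmax followed by negative log-likelihood
log_probs = F.log_softmax(predicted_logits, dim=1)
nll_loss = F.nll_loss(log_probs, true_labels)

print(ce_loss, nll_loss)  # both print the same value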

These are just a few examples of the many loss functions available in PyTorch. By understanding and implementing these loss functions, you can effectively train your machine learning models and improve their performance.
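
In practice, the loss value is what drives training: the model’s predictions and the targets go into the loss function, and the resulting scalar is backpropagated before the optimizer updates the weights. The sketch below is a minimal, illustrative training step using a hypothetical single-layer model and random data (the model, sizes, and learning rate here are made up for demonstration).

import torch
import torch.nn as nn

# Hypothetical toy regression setup with random data
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)

# One training step: forward pass, loss, backward pass, parameter update
optimizer.zero_grad()
predictions = model(inputs)
loss = loss_fn(predictions, targets)
loss.backward()
optimizer.step()

print(loss.item())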

2 Comments
@gopiranjanagrawal4756
3 months ago

Explanation is very clear and precise. Great learning

@shradhaagarwal812
3 months ago

KL Divergence:
True distribution P, which is usually a one-hot encoded vector, e.g. [1, 0, 0].
Predicted distribution Q, which is a vector of probabilities across all classes, e.g. [0.7, 0.2, 0.1].
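
PyTorch exposes this as nn.KLDivLoss. One detail to keep in mind: by default it expects the predicted distribution Q as log-probabilities (the input) and the true distribution P as probabilities (the target). The snippet below is a small sketch using the two distributions from this comment.

import torch
import torch.nn as nn

# True distribution P (one-hot) and predicted distribution Q
P = torch.tensor([[1.0, 0.0, 0.0]])
Q = torch.tensor([[0.7, 0.2, 0.1]])

# Input must be log-probabilities, target plain probabilities (default settings)
kl_loss = nn.KLDivLoss(reduction="batchmean")
loss = kl_loss(Q.log(), P)

print(loss)  # with a one-hot P this equals -log(0.7), i.e. the cross entropy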