
Making Linear Predictions in PyTorch



Last Updated on November 28, 2022

Linear regression is a statistical technique for estimating the relationship between two variables. A simple example of linear regression is to predict the height of someone based on the square root of the person’s weight (that’s what BMI is based on). To do this, we need to find the slope and intercept of the line. The slope is how much one variable changes when the other variable changes by one unit. The intercept is where the line crosses the $y$-axis.

Let’s use the simple linear equation $y=wx+b$ as an example. The output variable is $y$, while the input variable is $x$. The slope and $y$-intercept of the equation are represented by $w$ and $b$ respectively, hence they are referred to as the equation’s parameters. Knowing these parameters allows you to forecast the outcome $y$ for any given value of $x$.
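For example, with a slope of $w=3$ and an intercept of $b=1$, an input of $x=2$ gives $y = 3 \times 2 + 1 = 7$.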

Now that you have learned some basics of simple linear regression, let’s try to implement this useful algorithm in the PyTorch framework. Here, we’ll focus on the following points:

What linear regression is and how it can be implemented in PyTorch.
How to import the Linear class in PyTorch and use it for making predictions.
How to build a custom module for a linear regression problem, or for more complex models in the future.

So let’s get started.

Making Linear Predictions in PyTorch.
Picture by Daryan Shamkhali. Some rights reserved.

Overview

This tutorial is in three parts; they are:

Preparing Tensors
Using Linear Class from PyTorch
Building a Custom Linear Class

Preparing Tensors

Note that in this tutorial we’ll be covering one-dimensional linear regression with only two parameters. We’ll create this linear expression:

$$y=3x+1$$

We’ll define the parameters $w$ and $b$ as tensors in PyTorch. We set the requires_grad parameter to True, indicating that our model has to learn these parameters:

import torch

# defining the parameters 'w' and 'b'
w = torch.tensor(3.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)
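As a quick aside, setting requires_grad=True is what lets PyTorch compute gradients of the prediction with respect to $w$ and $b$. Here is a minimal sketch (not needed for the rest of this section) of what that enables:

# minimal sketch: gradients of y = w*x + b with respect to w and b, at x = 2
y = w * 2.0 + b
y.backward()
print(w.grad)  # tensor(2.) because dy/dw = x = 2
print(b.grad)  # tensor(1.) because dy/db = 1

A training loop would later use gradients like these to update the parameters; in this tutorial we only make predictions.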

In PyTorch, the prediction step is called the forward step. So, we’ll write a function that allows us to make predictions for $y$ at any given value of $x$.

# function of the linear equation for making predictions
def forward(x):
    y_pred = w * x + b
    return y_pred

Now that we have defined the function for linear regression, let’s make a prediction at $x=2$.

# let's predict y_pred at x = 2
x = torch.tensor([[2.0]])
y_pred = forward(x)
print("prediction of y at 'x = 2' is: ", y_pred)

This prints

prediction of y at 'x = 2' is: tensor([[7.]], grad_fn=<AddBackward0>)

Let’s also evaluate the equation with multiple inputs of $x$.

# making predictions at multiple values of x
x = torch.tensor([[3.0], [4.0]])
y_pred = forward(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

This prints

prediction of y at 'x = 3 & 4' is: tensor([[10.],
[13.]], grad_fn=<AddBackward0>)

As you can see, the function for the linear equation successfully predicted the outcome for multiple values of $x$.
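The predictions above are tensors carrying a grad_fn because $w$ and $b$ track gradients. If you only need plain Python numbers, here is a small sketch of how you might extract them:

# extracting plain values from the prediction tensors
print(forward(torch.tensor([[2.0]])).item())                   # 7.0
print(forward(torch.tensor([[3.0], [4.0]])).detach().numpy())  # the two predictions as a NumPy array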

In summary, this is the complete code:

import torch

# defining the parameters 'w' and 'b'
w = torch.tensor(3.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

# function of the linear equation for making predictions
def forward(x):
    y_pred = w * x + b
    return y_pred

# let's predict y_pred at x = 2
x = torch.tensor([[2.0]])
y_pred = forward(x)
print("prediction of y at 'x = 2' is: ", y_pred)

# making predictions at multiple values of x
x = torch.tensor([[3.0], [4.0]])
y_pred = forward(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

Using Linear Class from PyTorch

In order to solve real-world problems, you’ll have to build more complex models, and for that PyTorch provides many useful building blocks, including the Linear class, which allows us to make predictions. Here is how we can import the Linear class from PyTorch. We’ll also fix the random seed, because the parameters are randomly initialized.

from torch.nn import Linear
torch.manual_seed(1)

Note that previously we defined the values of $w$ and $b$ ourselves, but in practice the parameters are randomly initialized before training starts.

Let’s create a Linear model object and use the parameters() method to access the parameters ($w$ and $b$) of the model. The Linear class is initialized with the following arguments:

in_features: reflects the size of each input sample
out_features: reflects the size of each output sample

linear_regression = Linear(in_features=1, out_features=1)
print("displaying parameters w and b: ",
      list(linear_regression.parameters()))

This prints

displaying parameters w and b: [Parameter containing:
tensor([[0.5153]], requires_grad=True), Parameter containing:
tensor([-0.4414], requires_grad=True)]

Likewise, you can use the state_dict() method to get a dictionary containing the parameters.

print("getting python dictionary: ", linear_regression.state_dict())
print("dictionary keys: ", linear_regression.state_dict().keys())
print("dictionary values: ", linear_regression.state_dict().values())

This prints

getting python dictionary: OrderedDict([('weight', tensor([[0.5153]])), ('bias', tensor([-0.4414]))])
dictionary keys: odict_keys(['weight', 'bias'])
dictionary values: odict_values([tensor([[0.5153]]), tensor([-0.4414])])
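A state dict can also be loaded back with load_state_dict(). As a minimal sketch, here is how you could set a layer to the values $w=3$ and $b=1$ from the earlier equation (the name fixed_model is just for illustration), assuming the keys shown above:

# sketch: loading known parameter values (w=3, b=1) into a Linear layer
fixed_model = Linear(in_features=1, out_features=1)
fixed_model.load_state_dict({"weight": torch.tensor([[3.0]]),
                             "bias": torch.tensor([1.0])})
print(fixed_model(torch.tensor([[2.0]])))  # expect tensor([[7.]], ...)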

Now we can repeat what we did before. Let’s make a prediction using a single value of $x$.

# make predictions at x = 2
x = torch.tensor([[2.0]])
y_pred = linear_regression(x)
print("getting the prediction for x: ", y_pred)

This gives

getting the prediction for x: tensor([[0.5891]], grad_fn=<AddmmBackward0>)

which corresponds to $0.5153 \times 2 - 0.4414 = 0.5891$. Similarly, we’ll make predictions for multiple values of $x$.

# making predictions at multiple values of x
x = torch.tensor([[3.0], [4.0]])
y_pred = linear_regression(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

This prints

prediction of y at 'x = 3 & 4' is: tensor([[1.1044],
[1.6197]], grad_fn=<AddmmBackward0>)
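As a sanity check, you can reproduce these numbers by hand from the layer’s weight and bias attributes; here is a short sketch (the variable names are just for illustration):

# verifying the predictions manually from the layer's parameters
w_value = linear_regression.weight.item()
b_value = linear_regression.bias.item()
print(w_value * 3 + b_value, w_value * 4 + b_value)  # roughly 1.1044 and 1.6197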

Putting everything together, the complete code is as follows:

import torch
from torch.nn import Linear

torch.manual_seed(1)

linear_regression = Linear(in_features=1, out_features=1)
print("displaying parameters w and b: ", list(linear_regression.parameters()))
print("getting python dictionary: ", linear_regression.state_dict())
print("dictionary keys: ", linear_regression.state_dict().keys())
print("dictionary values: ", linear_regression.state_dict().values())

# make predictions at x = 2
x = torch.tensor([[2.0]])
y_pred = linear_regression(x)
print("getting the prediction for x: ", y_pred)

# making predictions at multiple values of x
x = torch.tensor([[3.0], [4.0]])
y_pred = linear_regression(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

Building a Custom Linear Class

PyTorch also offers the possibility to build a custom linear class. In later tutorials, we’ll be using this method to build more complex models. Let’s start by importing the nn module from PyTorch in order to build the custom linear class.

from torch import nn

Custom modules in PyTorch are classes derived from nn.Module. We’ll build a class for simple linear regression and name it Linear_Regression. This makes it a child class of nn.Module, so all of its methods and attributes are inherited. In the object constructor, we declare the input and output sizes, call the parent constructor via super(), and create an nn.Linear object. Lastly, in order to generate predictions from the input samples, we define a forward method in the class.

class Linear_Regression(nn.Module):
    def __init__(self, input_sample, output_sample):
        # Inheriting properties from the parent class
        super(Linear_Regression, self).__init__()
        self.linear = nn.Linear(input_sample, output_sample)

    # define function to make predictions
    def forward(self, x):
        output = self.linear(x)
        return output

Now, let’s create a simple linear regression model. It will simply be an equation of a line in this case. As a sanity check, let’s also print out the model parameters.

model = Linear_Regression(input_sample=1, output_sample=1)
print("printing the model parameters: ", list(model.parameters()))

This prints

printing the model parameters: [Parameter containing:
tensor([[-0.1939]], requires_grad=True), Parameter containing:
tensor([0.4694], requires_grad=True)]

As we did in the earlier sections of this tutorial, we’ll evaluate our custom linear regression model and try to make predictions for single and multiple values of $x$ as input.

x = torch.tensor([[2.0]])
y_pred = model(x)
print("getting the prediction for x: ", y_pred)

This prints

getting the prediction for x: tensor([[0.0816]], grad_fn=<AddmmBackward0>)

which corresponds to $-0.1939 \times 2 + 0.4694 = 0.0816$. As you can see, our model has been able to predict the outcome and the result is a tensor object. Similarly, let’s try to get predictions for multiple values of $x$.

x = torch.tensor([[3.0], [4.0]])
y_pred = model(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

This prints

prediction of y at 'x = 3 & 4' is: tensor([[-0.1122],
[-0.3061]], grad_fn=<AddmmBackward0>)

So, the model also works well for multiple values of $x$.
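Because the custom class simply wraps nn.Linear, it also works with more than one input feature per sample. Here is a small sketch (purely illustrative; multi_model and its inputs are made up for this example):

# sketch: the same custom class with two input features per sample
multi_model = Linear_Regression(input_sample=2, output_sample=1)
x = torch.tensor([[2.0, 3.0], [4.0, 5.0]])  # two samples, two features each
print(multi_model(x))  # one prediction per sample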

Putting everything together, the following is the complete code:

import torch
from torch import nn

torch.manual_seed(42)

class Linear_Regression(nn.Module):
    def __init__(self, input_sample, output_sample):
        # Inheriting properties from the parent class
        super(Linear_Regression, self).__init__()
        self.linear = nn.Linear(input_sample, output_sample)

    # define function to make predictions
    def forward(self, x):
        output = self.linear(x)
        return output

model = Linear_Regression(input_sample=1, output_sample=1)
print("printing the model parameters: ", list(model.parameters()))

x = torch.tensor([[2.0]])
y_pred = model(x)
print("getting the prediction for x: ", y_pred)

x = torch.tensor([[3.0], [4.0]])
y_pred = model(x)
print("prediction of y at 'x = 3 & 4' is: ", y_pred)

Summary

In this tutorial we discussed how we can build neural networks from scratch, starting off with a simple linear regression model. We have explored multiple ways of implementing simple linear regression in PyTorch. In particular, we learned:

What linear regression is and how it can be implemented in PyTorch.
How to import the Linear class in PyTorch and use it for making predictions.
How to build a custom module for a linear regression problem, or for more complex models in the future.


