Training Multiple Outputs
author: Juma Shafara
date: "2024-08-09"
title: Linear Regression Multiple Outputs
keywords: [Training Two Parameter, Mini-Batch Gradient Descent, Training Two Parameter Mini-Batch Gradient Descent]
description: In this lab, you will create a model the PyTorch way. This will help you as models get more complicated.

Objective
- How to create complicated models using PyTorch's built-in functions.
Table of Contents
In this lab, you will create a model the PyTorch way. This will help you as models get more complicated.
- Make Some Data
- Create the Model and Cost Function the PyTorch way
- Train the Model: Batch Gradient Descent
- Practice Questions
Estimated Time Needed: 20 min
Import the following libraries:
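A minimal sketch of the imports the rest of the lab relies on (`torch` for tensors and modules, `matplotlib` for plotting the cost at the end):

```python
import torch
from torch import nn, optim
import matplotlib.pyplot as plt
```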
Set the random seed:
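For example, fixing PyTorch's random seed for reproducibility (the seed value 1 is an arbitrary choice):

```python
torch.manual_seed(1)
```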
```python
from torch.utils.data import Dataset, DataLoader

class Data(Dataset):
    def __init__(self):
        # 20 samples, each with 2 features spanning [-1, 1)
        self.x = torch.zeros(20, 2)
        self.x[:, 0] = torch.arange(-1, 1, 0.1)
        self.x[:, 1] = torch.arange(-1, 1, 0.1)
        # True weights (2x2) and bias (1x2) for the two outputs
        self.w = torch.tensor([[1.0, -1.0], [1.0, 3.0]])
        self.b = torch.tensor([[1.0, -1.0]])
        # Targets: a linear function of x plus small Gaussian noise
        self.f = torch.mm(self.x, self.w) + self.b
        self.y = self.f + 0.001 * torch.randn((self.x.shape[0], 1))
        self.len = self.x.shape[0]

    def __getitem__(self, index):
        return self.x[index], self.y[index]

    def __len__(self):
        return self.len
```
Create a dataset object:
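For instance (the variable name `data_set` is an assumption, reused in the data loader below):

```python
data_set = Data()
```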
Create a custom module:
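A sketch of such a module, assuming the class name `linear_regression`: it wraps a single `nn.Linear` layer with 2 input features and 2 output features to match the data above.

```python
class linear_regression(nn.Module):
    def __init__(self, input_size, output_size):
        super(linear_regression, self).__init__()
        # A single linear layer maps the 2 input features to 2 outputs
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, x):
        return self.linear(x)

model = linear_regression(2, 2)
```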
Create an optimizer object and set the learning rate to 0.1. Don't forget to enter the model parameters in the constructor.
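For example, with stochastic gradient descent (assuming the `model` object created above):

```python
optimizer = optim.SGD(model.parameters(), lr=0.1)
```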

Create the criterion function that calculates the total loss or cost:
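A common choice here is mean squared error; note that `nn.MSELoss` averages rather than sums the squared errors, which only rescales the cost:

```python
criterion = nn.MSELoss()
```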
Create a data loader object and set the batch_size to 5:
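For example (assuming the `data_set` object created earlier):

```python
train_loader = DataLoader(dataset=data_set, batch_size=5)
```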
Run 100 epochs of Mini-Batch Gradient Descent and store the total loss or cost for every iteration. Remember that this is an approximation of the true total loss or cost.
```python
LOSS = []
epochs = 100

for epoch in range(epochs):
    for x, y in train_loader:
        # Make a prediction
        yhat = model(x)
        # Calculate the loss
        loss = criterion(yhat, y)
        # Store the loss/cost
        LOSS.append(loss.item())
        # Clear the gradients
        optimizer.zero_grad()
        # Backward pass: compute the gradient of the loss
        # with respect to all the learnable parameters
        loss.backward()
        # The step function on an Optimizer makes an update to its parameters
        optimizer.step()
```
Plot the cost:
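A minimal plot of the stored losses, assuming `matplotlib` was imported as above:

```python
plt.plot(LOSS)
plt.xlabel("Iterations")
plt.ylabel("Cost/total loss")
plt.show()
```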
About the Author:
Hi, my name is Juma Shafara. I am a Data Scientist and Instructor at DATAIDEA. I have taught hundreds of people Programming, Data Analysis, and Machine Learning.
I also enjoy developing innovative algorithms and models that can drive insights and value.
I regularly share some content that I find useful throughout my learning/teaching journey to simplify concepts in Machine Learning, Mathematics, Programming, and related topics on my website jumashafara.dataidea.org.
Besides the technical stuff, I enjoy watching soccer and movies, and reading mystery books.