graph TB
subgraph Pytorch Introduction
node4( Linear Regression)
click node4 "#linear-regression"
node4( Linear Regression) --> node23( Below Pytorch)
click node23 "#below-pytorch"
end
Pytorch Introduction
A starter tutorial on PyTorch. It aims to introduce PyTorch through examples, namely by implementing:
1) Linear Regression
2) Logistic Regression
3) A Simple Neural Network
Before going into the implementations, let's understand the basic unit of PyTorch, i.e. a Tensor.
A tensor is an n-dimensional array, similar to a NumPy array, but it can also run on GPUs, so computation can be sped up.
Another important feature PyTorch provides is autograd, i.e. automatic differentiation with respect to variables. This gives us the gradients we need during the backpropagation step of any neural network.
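As a quick illustration of the tensor part, here is a minimal sketch (the values and shapes are arbitrary, chosen only for the example):

```python
import torch

# Create a 2x3 tensor, analogous to a 2x3 NumPy array.
a = torch.Tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

# Elementwise operations work just like NumPy.
b = a * 2
print(b.shape)  # torch.Size([2, 3])

# Tensors can be moved to a GPU when one is available:
# a = a.cuda()
```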
Now that we know the basic internals of PyTorch, let's try to implement the most basic ML algorithm.
Linear Regression
The autograd feature mentioned above cannot be used directly on PyTorch tensors; we need to wrap the tensors in Variables first.
In the computational graph we are going to build, each Variable object will be one of the nodes.
If w is a Variable, then w.data is a Tensor, and w.grad is another Variable holding the gradient of some scalar value (usually the loss) with respect to w.
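A minimal sketch of this Variable/autograd relationship (the toy scalar "loss" here is made up purely to have something to differentiate; note that in recent PyTorch versions Variable is merged into Tensor, but the wrapper still works):

```python
import torch
from torch.autograd import Variable

# Wrap a tensor in a Variable so autograd can track operations on it.
# requires_grad=True marks w as a node we want gradients for.
w = Variable(torch.Tensor([3.0]), requires_grad=True)

# Build a tiny computational graph: a scalar that depends on w.
loss = (w * 2) ** 2  # loss = (2w)^2 = 4w^2

# Backward pass: populates w.grad with d(loss)/dw = 8w.
loss.backward()

print(w.data)  # the underlying Tensor
print(w.grad)  # gradient of the loss with respect to w
```

With w = 3, the gradient 8w works out to 24, which is what w.grad holds after the backward pass.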