Linear Regression with PyTorch

PyTorch is an open-source deep learning framework from Facebook that provides a seamless bridge between research prototyping and production deployment.
The purpose of this article is to introduce the basics of PyTorch and help beginners write their first PyTorch code in 4 minutes.
Preparing for coding
You need Python installed on your computer, along with some scientific computing packages such as numpy, and, most importantly, PyTorch itself. The results below were obtained in a Jupyter notebook. Interested readers can download Anaconda, which ships with Jupyter notebook. (Note: Anaconda supports virtual environments for multiple Python versions; Jupyter notebook is a web-based coding interface that organizes code into cells and shows results as you run them, which is very convenient!)
There are plenty of online tutorials for configuring and installing the software, so they are not repeated here; as the saying goes, what you learn on paper always feels shallow, and true understanding only comes from practice. Let's go straight into PyTorch and start coding!
Tensors
The tensor is an important basic data type in neural network frameworks. It can be understood simply as a multi-dimensional matrix whose elements all share a single data type. Tensors are connected by operations to form a computational graph.
In the following code example, a 2*3 two-dimensional tensor x is created, with the data type specified as Float:

import torch 
#Tensors 
x=torch.FloatTensor([[1,2,3],[4,5,6]]) 
print(x.size(),"\n",x) 

Operation results (exact formatting may vary slightly across PyTorch versions):

torch.Size([2, 3])
tensor([[1., 2., 3.],
        [4., 5., 6.]])
PyTorch contains many mathematical operations on tensors. It also provides utilities such as efficient serialization of tensors and other arbitrary data types.
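For example, torch.save and torch.load serialize a tensor to disk and back. A minimal sketch (the file name tensor.pt is just an illustration):

#Serialize a tensor to disk and load it back
t=torch.FloatTensor([[1,2,3],[4,5,6]])
torch.save(t,"tensor.pt")
t2=torch.load("tensor.pt")
print(torch.equal(t,t2))   #True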
Here is an example of tensor addition/subtraction, where torch.ones(sizes, out=None) returns a tensor of all 1s whose shape is defined by the parameter sizes. In the example, two 2*3 tensors with all values 1 are created and added to x in place, which is equivalent to adding 2 to every element of x. The code and results are as follows:

#Add tensors 
x.add_(torch.ones([2,3])+torch.ones([2,3])) 

Operation results:

tensor([[3., 4., 5.],
        [6., 7., 8.]])
Similarly, PyTorch supports subtraction. Continuing from the result above, subtracting 2 from every element restores x to its original values.

#Subtract Tensor 
x.sub_(torch.ones([2,3])*2) 

Operation results:

tensor([[1., 2., 3.],
        [4., 5., 6.]])
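Note the trailing-underscore convention used above: methods such as add_ and sub_ modify the tensor in place, while add and sub return a new tensor and leave the original unchanged:

#In-place vs out-of-place: add_ modifies x, add returns a new tensor
y=x.add(torch.ones([2,3]))   #x is unchanged; y holds the result
print(x,"\n",y)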
For other PyTorch operations, readers can refer to the official documentation.

PyTorch and NumPy

Users can easily convert back and forth between PyTorch tensors and NumPy arrays.
Below is a simple example that converts an np.matrix to a PyTorch tensor and multiplies it with x:

#Numpy to torch tensors 
import numpy as np 
y=np.matrix([[2,2],[2,2],[2,2]])                 #default dtype (int64)
z=np.matrix([[2,2],[2,2],[2,2]],dtype="int16")   #int16, to match x.short()
x.short() @ torch.from_numpy(z) 

Operation results:

tensor([[12, 12],
        [30, 30]], dtype=torch.int16)
Here @ is the overloaded matrix multiplication operator. x is a 2*3 tensor with values [[1,2,3],[4,5,6]]; it is multiplied by z (converted to a tensor), which is 3*2, and the result is a 2*2 tensor, exactly as in ordinary matrix multiplication.
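The reverse direction is just as easy: a CPU tensor exposes .numpy(), which returns a NumPy array sharing memory with the tensor. A small sketch:

#Torch tensor to numpy array; the two share the same memory
a=torch.ones(2,3)
b=a.numpy()   #no copy is made
a.add_(1)     #the in-place change is visible through b
print(b)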
In addition, PyTorch supports reshaping tensor structures. Below is an example of reshaping tensor x into a 1*6 tensor, similar to reshape in numpy.
#Reshape tensors(similar to np.reshape)
x.view(1,6)
Operation results:

tensor([[1., 2., 3., 4., 5., 6.]])
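As in NumPy, one dimension may be given as -1 and view will infer it from the total number of elements:

#-1 lets view infer the missing dimension
x.view(-1)    #shape (6,)
x.view(3,-1)  #shape (3,2)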
CPUs and GPUs
PyTorch allows tensors to be moved between devices at run time; the torch.cuda.device context manager can also be used to select which GPU to use. Here is the sample code:

#move variables and copies across computer devices 
x=torch.FloatTensor([[1,2,3],[4,5,6]]) 
y=np.matrix([[2,2,2],[2,2,2]],dtype="float32") 
if torch.cuda.is_available(): 
    x=x.cuda()                    #move x to the GPU
    y=torch.from_numpy(y).cuda()  #convert y and move it to the GPU
    z=x+y 
    print(z) 
print(x.cpu())                    #copy back to the CPU

Operation results:
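The torch.cuda.device context manager mentioned above selects which GPU the enclosed .cuda() calls allocate on. A minimal sketch, assuming at least one CUDA device is present (index 0 is an illustrative choice):

#torch.cuda.device sets the default GPU inside its scope
if torch.cuda.is_available():
    with torch.cuda.device(0):
        a=torch.FloatTensor([1.,2.]).cuda()   #allocated on GPU 0
        print(a.get_device())                 #0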
PyTorch Variables
Variables are just a thin layer wrapped around tensors: they support almost all of the APIs defined on Tensor, and they are part of the autograd package, which provides classes and functions for automatically differentiating arbitrary scalar-valued functions.
Following is a simple example of PyTorch variable usage. The result of multiplying v1 and v2 is assigned to v3. The requires_grad attribute defaults to False; if a node's requires_grad is set to True, then requires_grad is True for all nodes that depend on it. This is used for gradient computation.

#Variable(part of autograd package) 
#Variables (graph nodes) are thin wrappers around tensors and carry dependency knowledge 
#Variables enable backpropagation of gradients and automatic differentiation 
#Variables were given a 'volatile' flag during inference in older PyTorch versions 
from torch.autograd import Variable 
v1 = Variable(torch.tensor([1.,2.,3.]), requires_grad=False) 
v2 = Variable(torch.tensor([4.,5.,6.]), requires_grad=True) 
v3 = v1*v2 
v3.data.numpy() 

Operation results:

array([ 4., 10., 18.], dtype=float32)

#Variables remember what created them 
v3.grad_fn

Operation results:

<MulBackward0 object at 0x...>
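A quick check confirms the propagation rule described above: the result requires gradients because one of its inputs does.

#requires_grad propagates from inputs to results
print(v1.requires_grad)   #False
print(v2.requires_grad)   #True
print(v3.requires_grad)   #True, inherited from v2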
Back Propagation
The backpropagation algorithm computes the gradient of the loss with respect to the input weights and biases, so that the weights can be updated and the loss reduced in the next optimization iteration. PyTorch makes this straightforward by defining a backward method on variables that performs the backpropagation.
The following simple example uses backpropagation to compute the derivative of sin(x). Since d(sin(x))/dx = cos(x), the gradients at x = 0, π, 1.5π, and 2π should be 1, -1, 0, and 1:

#Backpropagation with example of sin(x) 
x=Variable(torch.Tensor(np.array([0.,1.,1.5,2.])*np.pi),requires_grad=True) 
y=torch.sin(x) 
x.grad   #None until backward is called
y.backward(torch.Tensor([1.,1.,1.,1.])) 
#Check gradient is indeed cos(x) 
if( (x.grad.data.int().numpy()==torch.cos(x).data.int().numpy()).all() ): 
    print ("d(sin(x))/dx=cos(x)") 

Operation results:

d(sin(x))/dx=cos(x)
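The same mechanism applies to any scalar-valued result. A minimal sketch with y = x*x, whose derivative 2x evaluates to 6 at x = 3:

#Scalar example: y=x*x, dy/dx=2x
t=Variable(torch.Tensor([3.0]),requires_grad=True)
(t*t).backward()
print(t.grad)   #tensor([6.])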

SLR: Simple Linear Regression

Now that we know the basics, we can start using PyTorch to solve a simple machine learning problem: simple linear regression. We will complete it in four simple steps:
Step 1:
In step 1, we create an artificial dataset generated by the equation y = wx + b, with random noise injected. See the following example:
#Simple Linear Regression

# Fit a line to the data: y=w*x+b 
#Deterministic behavior 
np.random.seed(0) 
torch.manual_seed(0) 
#Step 1: Dataset 
w=2;b=3 
x=np.linspace(0,10,100) 
y=w*x+b+np.random.randn(100)*2 
xx=x.reshape(-1,1)   #shape (100,1): one column per feature, as nn.Linear expects
yy=y.reshape(-1,1) 

Step 2:
In step 2, we define a simple class LinearRegressionModel with a forward function, using torch.nn.Linear in the constructor to apply a linear transformation to the input data:

#Step 2: Model 
class LinearRegressionModel(torch.nn.Module): 
    def __init__(self,in_dimn,out_dimn): 
        super(LinearRegressionModel,self).__init__() 
        self.model=torch.nn.Linear(in_dimn,out_dimn) 
    def forward(self,x): 
        y_pred=self.model(x) 
        return y_pred 
model=LinearRegressionModel(in_dimn=1, out_dimn=1) 
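
Optionally, you can print the model and run one untrained prediction as a sanity check (this snippet is illustrative and not part of the four steps):

#Inspect the model and try a single untrained forward pass
print(model)
sample=torch.FloatTensor([[1.0]])   #shape (1,1): one sample, one feature
print(model(sample))                #arbitrary value before training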

Step 3:
Next, we train the model, using MSELoss as the cost function and SGD as the optimizer:

#Step 3: Training 
cost=torch.nn.MSELoss() 
optimizer=torch.optim.SGD(model.parameters(),lr=0.01,momentum=0.9) 
inputs=Variable(torch.from_numpy(xx.astype("float32")))    #the (100,1) arrays from step 1
outputs=Variable(torch.from_numpy(yy.astype("float32"))) 
for epoch in range(100): 
    #3.1 forward pass: 
    y_pred=model(inputs) 
    #3.2 compute loss 
    loss=cost(y_pred,outputs) 
    #3.3 backward pass 
    optimizer.zero_grad()   #clear gradients accumulated on the previous step
    loss.backward() 
    optimizer.step() 
    if((epoch+1)%10==0): 
        print("epoch{},loss{}".format(epoch+1,loss.data)) 

Step 4:
Now that training is complete, let's inspect the model visually:

#Step 4: Display model and confirm 
import matplotlib.pyplot as plt 
plt.figure(figsize=(4,4)) 
plt.title("Model and Dataset") 
plt.xlabel("X");plt.ylabel("Y") 
plt.grid() 
plt.plot(x,y,"rx",markersize=4,label="DataSet") 
plt.plot(x,model.model.weight.item()*x+model.model.bias.item(),label="Regression Model") 
plt.legend();plt.show() 

Operation results: a scatter of the noisy dataset with the fitted regression line running through it.
Now that you have completed your first linear regression example in PyTorch, readers who want to go a step further can refer to the official PyTorch documentation to build out most coding applications.
