PyTorch Learning Notes

1. PyTorch Installation

Install CUDA and cuDNN, for example CUDA 10 with cuDNN 7.5.

Download torch from https://pytorch.org/ and select the whl files for torch and torchvision that match your Python and CUDA versions.

Install torch with pip install <path_to_whl>, and install torchvision in the same way.
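
After installation, a quick sanity check (a minimal sketch; it only assumes torch installed successfully) confirms the version and whether the GPU is visible:

# -*- coding:utf-8 -*-
import torch

# Print the installed torch version, whether CUDA is usable, and the CUDA version it was built with
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.version.cuda)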

 

2. Preliminary Use of PyTorch

# -*- coding:utf-8 -*-
__author__ = 'Leo.Z'

import torch
import time
# Check the torch version
print(torch.__version__)
# Define matrices a and b, filled with random values
a = torch.randn(10000, 1000)
b = torch.randn(1000, 2000)
# Record start time
t0 = time.time()
# Compute the matrix product
c = torch.matmul(a, b)
# Record the end time
t1 = time.time()
# Print results and run time
print(a.device, t1 - t0, c.norm(2))   # c.norm(2) computes the L2 norm of c

# Use the GPU device
device = torch.device('cuda')
# Move a and b to the GPU
a = a.to(device)
b = b.to(device)
# Run and record the elapsed time
t0 = time.time()
c = torch.matmul(a, b)
t1 = time.time()
# Print the run time on the GPU
print(a.device, t1 - t0, c.norm(2))

# Run again to confirm runtime
t0 = time.time()
c = torch.matmul(a, b)
t1 = time.time()
print(a.device, t1 - t0, c.norm(2))

The results are as follows:

1.1.0
cpu 0.14660906791687012 tensor(141129.3906)
cuda:0 0.19049072265625 tensor(141533.1250, device='cuda:0')
cuda:0 0.006981372833251953 tensor(141533.1250, device='cuda:0')

We found that the two runs on the GPU took different amounts of time, and the first GPU run was even slower than the CPU run, because the first run includes the overhead of initializing the CUDA runtime environment.
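
Also note that CUDA kernel launches are asynchronous, so time.time() can return before the multiplication has actually finished, which makes the numbers above only indicative. A minimal sketch of a more careful timing pattern (assuming a CUDA-capable GPU is available):

import time
import torch

a = torch.randn(10000, 1000, device='cuda')
b = torch.randn(1000, 2000, device='cuda')

# Warm-up run so CUDA initialization is not counted in the measurement
torch.matmul(a, b)
torch.cuda.synchronize()

t0 = time.time()
c = torch.matmul(a, b)
# Wait for all queued GPU work to finish before stopping the clock
torch.cuda.synchronize()
t1 = time.time()
print('GPU matmul time:', t1 - t0, c.norm(2))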

 

3. Automatic Differentiation

# -*- coding:utf-8 -*-
__author__ = 'Leo.Z'

import torch

# Define x, a, b and c; mark a, b, c as requiring gradients with requires_grad=True
x = torch.tensor(2.)
a = torch.tensor(1., requires_grad=True)
b = torch.tensor(2., requires_grad=True)
c = torch.tensor(3., requires_grad=True)
# Define the function y = a*x^2 + b*x + c
y = a * x ** 2 + b * x + c
# Use autograd.grad to compute the gradients of y with respect to a, b and c
grads = torch.autograd.grad(y, [a, b, c])
# Print the gradients dy/da, dy/db, dy/dc evaluated at x
print('after', grads[0], grads[1], grads[2])
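
At x = 2 the gradients are dy/da = x^2 = 4, dy/db = x = 2 and dy/dc = 1, so the script prints tensors with values 4., 2. and 1. The same gradients can also be obtained with backward(), which stores them in the .grad attribute of each leaf tensor; a minimal sketch:

# -*- coding:utf-8 -*-
import torch

x = torch.tensor(2.)
a = torch.tensor(1., requires_grad=True)
b = torch.tensor(2., requires_grad=True)
c = torch.tensor(3., requires_grad=True)

y = a * x ** 2 + b * x + c
# Populate .grad on each leaf tensor instead of returning the gradients directly
y.backward()
print(a.grad, b.grad, c.grad)  # tensor(4.) tensor(2.) tensor(1.)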

