PyTorch constructs dynamic computational graphs that allow maximum flexibility and speed for deep learning research. Dynamic graphs are useful when the computation cannot be fully determined ahead of time, since they allow the graph to change on each iteration based on the data. This makes PyTorch well suited to problems with dynamic or variable-sized inputs. While static graphs can optimize computation, dynamic graphs are easier to debug and extend. PyTorch aims to be a simple and intuitive platform for neural network programming and research.
1. PyTorch for TensorFlow Developers
Overview
PyTorch constructs Dynamic Graphs
--- Abdul Muneer
https://www.minds.ai/
2. Why do we use any Framework?
Model Prediction
Gradient computation (automatic differentiation); a minimal sketch follows below.
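Not in the original slides, but a minimal sketch of what "gradient computation" buys you, using the Variable API this deck is based on (names and values are illustrative):
In [ ]: import torch
        from torch.autograd import Variable
        # y = w * x + b; autograd differentiates it for us
        x = Variable(torch.Tensor([2.0]))
        w = Variable(torch.Tensor([3.0]), requires_grad=True)
        b = Variable(torch.Tensor([1.0]), requires_grad=True)
        y = w * x + b
        y.backward()      # automatic differentiation
        print(w.grad)     # dy/dw = x = 2
        print(b.grad)     # dy/db = 1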
3. Why should we explore non-TF frameworks?
Engineering is a key component of deep learning practice.
What engineering problems do existing tools fail to solve?
It improves our understanding of TF.
We do not end up a one-trick pony.
It helps us understand network implementations in those frameworks.
5. What is PyTorch?
It's a Python-based scientific computing package targeted at two sets of audiences:
A replacement for numpy that uses the power of GPUs
A deep learning research platform that provides maximum flexibility and speed
6. In [ ]: # MNIST example: a two-layer CNN for 28x28 grayscale digits
import torch
import torch.nn as nn
from torch.autograd import Variable

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # conv -> batchnorm -> ReLU -> 2x2 max-pool: 28x28x1 -> 14x14x16
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2))
        # second block: 14x14x16 -> 7x7x32
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2))
        # fully connected layer maps the flattened 7*7*32 features to 10 classes
        self.fc = nn.Linear(7*7*32, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)   # flatten to (batch, 7*7*32)
        out = self.fc(out)
        return out
7. In [ ]: cnn = CNN()

# Hyperparameters (example values; the slide assumes these are defined elsewhere,
# as is train_loader, a torch.utils.data.DataLoader over MNIST)
learning_rate = 0.001
num_epochs = 5

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate)

# Train the Model
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images)
        labels = Variable(labels)

        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = cnn(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
8. PyTorch is imperative
In [1]: import torch

In [2]: x = torch.Tensor(5, 3)   # uninitialized: whatever bytes were in memory
        x
Out[2]:
 0.0000e+00 -8.5899e+09  0.0000e+00
-8.5899e+09  6.6449e-33  1.9432e-19
 4.8613e+30  5.0832e+31  7.5338e+28
 4.5925e+24  1.7448e+22  1.1429e+33
 4.6114e+24  2.8031e+20  1.2410e+28
[torch.FloatTensor of size 5x3]
9. PyTorch is imperative
No need for placeholders; everything is a tensor.
Debug it with a regular Python debugger (a short sketch follows below).
You can go almost as high-level as Keras and as low-level as pure TensorFlow.
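Since everything runs eagerly, standard Python tooling applies. A small sketch (hypothetical example, not from the slides) of dropping into pdb mid-computation:
In [ ]: import pdb
        import torch
        from torch.autograd import Variable
        x = Variable(torch.rand(5, 3))
        y = x * 2
        pdb.set_trace()   # pause here: inspect y, call y.size(), step line by line
        z = y.sum()
        print(z)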
11. Tensors
Similar to numpy's ndarrays
Can also be used on a GPU to accelerate computing (GPU sketch after the example below)
In [2]: import torch
        x = torch.Tensor(5, 3)
        print(x)
 0.0000  0.0000  0.0000
-2.0005  0.0000  0.0000
 0.0000  0.0000  0.0000
 0.0000  0.0000  0.0000
 0.0000  0.0000  0.0000
[torch.FloatTensor of size 5x3]
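A sketch of the GPU claim above, using the .cuda() call from this era of PyTorch (the availability guard keeps it runnable on CPU-only machines):
In [ ]: import torch
        x = torch.rand(5, 3)
        y = torch.rand(5, 3)
        if torch.cuda.is_available():
            x = x.cuda()   # move both operands to the GPU
            y = y.cuda()
        print(x + y)       # the add runs on whichever device the tensors live on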
12. Construct a randomly initialized matrix
In [3]: x = torch.rand(5, 3)
        print(x)
 0.6543  0.1334  0.1410
 0.6995  0.5005  0.6566
 0.2181  0.1329  0.7526
 0.6533  0.6995  0.6978
 0.7876  0.7880  0.9808
[torch.FloatTensor of size 5x3]

In [4]: x.size()
Out[4]: torch.Size([5, 3])
17. Numpy Bridge
The torch Tensor and numpy array share their underlying memory locations;
changing one will change the other (demonstrated below).
In [6]: a = torch.ones(3)
        print(a)
 1
 1
 1
[torch.FloatTensor of size 3]

In [7]: b = a.numpy()
        print(b)
[ 1.  1.  1.]
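To see the shared memory in action, a continuation of the cells above (not from the original slides): an in-place update on the torch side shows up in the numpy array.
In [ ]: a.add_(1)   # trailing underscore = in-place op in torch
        print(a)    # 2 2 2 [torch.FloatTensor of size 3]
        print(b)    # [ 2.  2.  2.]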
19. Converting a numpy Array to a torch Tensor
In [13]: import numpy as np
         a = np.ones(5)
         b = torch.from_numpy(a)

In [16]: np.add(a, 1, out=a)   # in-place add on the numpy side
Out[16]: array([ 4.,  4.,  4.,  4.,  4.])
(the output shows 4 rather than 2, so this cell was evidently run more than once)

In [17]: print(a)
         print(b)
[ 4.  4.  4.  4.  4.]
 4
 4
 4
 4
 4
[torch.DoubleTensor of size 5]
20. Autograd: automatic differentiation
The autograd package is central to all neural networks in PyTorch.
21. Variable
The central class of the autograd package. Its key attributes:
.data: the raw tensor
.grad: the gradient w.r.t. this variable
.creator: the Function that created this Variable in the graph
22. Function
Function is another class that is central to the autograd implementation (think of
operations in TF).
Variable and Function are interconnected and build up an acyclic graph (sketch below).
The graph encodes a complete history of the computation.
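A small sketch of how Variable and Function hook together in the PyTorch of this era (the .creator attribute was renamed .grad_fn in later releases):
In [ ]: import torch
        from torch.autograd import Variable
        x = Variable(torch.ones(2, 2), requires_grad=True)
        y = x + 2          # produced by an operation, so a Function created it
        print(x.creator)   # None: x was made by the user, not by a Function
        print(y.creator)   # the Function node that computed y in the graph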
33. In [32]: # Do more operations on y
# (from the earlier slides: x = Variable(torch.ones(2, 2), requires_grad=True); y = x + 2)
z = y * y * 3
out = z.mean()
print(z, out)
Variable containing:
 27  27
 27  27
[torch.FloatTensor of size 2x2]
Variable containing:
 27
[torch.FloatTensor of size 1]
34. Gradients
Gradients are computed automatically upon invoking the .backward() method.
In [33]: out.backward()
         print(x.grad)
Variable containing:
 4.5000  4.5000
 4.5000  4.5000
[torch.FloatTensor of size 2x2]
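Why 4.5: from the earlier slides, x = ones(2, 2) with requires_grad=True and y = x + 2, so out = (1/4) * Σ 3(xᵢ + 2)², and ∂out/∂xᵢ = (3/2)(xᵢ + 2) = 4.5 at xᵢ = 1.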
36. Updating Weights
weight = weight - learning_rate * gradient
In [ ]: learning_rate = 0.01
        # The learnable parameters of a model are returned by net.parameters()
        for f in net.parameters():
            f.data.sub_(f.grad.data * learning_rate)  # weight = weight - learning_rate * gradient
37. Use Optimizers instead of updating weights by hand.
In [ ]: import torch.optim as optim
        # create your optimizer
        optimizer = optim.SGD(net.parameters(), lr=0.01)
        for i in range(num_epochs):
            # in your training loop:
            optimizer.zero_grad()   # zero the gradient buffers
            output = net(input)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
46. DL frameworks usually consist of two "interpreters":
1. The host language (i.e. Python)
2. The computational graph
i.e., a language that sets up the computational graph, and an execution mechanism
that is different from the host language (contrast sketch below).
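A sketch of the two interpreters in TF 1.x style (illustrative, not from the slides): the graph is built first in the "second interpreter" and then handed to a separate execution mechanism.
In [ ]: import tensorflow as tf
        a = tf.placeholder(tf.float32)
        b = tf.placeholder(tf.float32)
        c = a * b                    # builds a graph node; nothing is computed yet
        with tf.Session() as sess:   # the separate execution mechanism
            print(sess.run(c, feed_dict={a: 3.0, b: 4.0}))   # 12.0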
47. Static computational graphs can optimize computation.
Dynamic computational graphs are valuable when the computation cannot be fully
determined ahead of time,
e.g. recursive computations that are based on variable data (see the sketch below).
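A sketch of such data-dependent computation (in the spirit of the official PyTorch examples, with illustrative values): the number of loop iterations, and hence the graph, depends on the data itself.
In [ ]: import torch
        from torch.autograd import Variable
        x = Variable(torch.rand(1), requires_grad=True)
        y = x
        while y.data.norm() < 10:   # plain Python control flow drives the graph
            y = y * 2               # the graph grows by one node per iteration
        y.backward()
        print(x.grad)               # reflects however many doublings actually ran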
50. Case against dynamic graphs
Dynamic capabilities can be added to a static computation graph.
51. ...but probably not in a natural fit that your head will appreciate.
Exhibit A: tf.while_loop
Exhibit B: a whole new library called TensorFlow Fold
52. Problems with achieving the same result in static graphs
Difficulty in expressing complex flow-control logic
It looks very different in the graph than in the imperative coding style of the
host language
Requires sophistication on the developer's part
Complexity of the computation graph implementation
Forced to address all possible cases
Reduces the opportunity for optimization
53. Case FOR dynamic graphs
Well suited to dynamic data
Any kind of additional convenience helps speed up your explorations
It works just like Python
No split-brain experience of another execution engine running the computation
Easier to debug
Easier to create unique extensions
54. Use cases of Dynamic Graphs
Variably sized inputs (sketch after this list)
Variably structured inputs
Nontrivial inference algorithms
Variably structured outputs
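For the variably sized inputs case, a hypothetical toy encoder (rnn_cell and encode are made-up names for illustration): with a dynamic graph, handling per-sample sequence lengths is just a Python loop.
In [ ]: import torch
        import torch.nn as nn
        from torch.autograd import Variable
        rnn_cell = nn.RNNCell(input_size=8, hidden_size=16)
        def encode(seq):                     # seq: Variable of shape (T, 8), T varies
            h = Variable(torch.zeros(1, 16))
            for t in range(seq.size(0)):     # the graph's length tracks the actual data
                h = rnn_cell(seq[t].view(1, 8), h)
            return h
        short = Variable(torch.rand(3, 8))
        longer = Variable(torch.rand(11, 8))
        print(encode(short).size(), encode(longer).size())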
55. Why Dynamic Computation Graphs are awesome
Deep learning architectures will traverse the same evolutionary path as
traditional computation:
from monolithic stand-alone programs to more modular programs.
In the old days we had monolithic DL systems with a single analytic objective
function.
With dynamic graphs, systems can have multiple networks competing/cooperating.
Richer modularity, similar to information encapsulation in OOP.
56. Future Prospects
I predict it will coexist with TF:
sort of like Angular vs React in the JS world, with PyTorch similar to React;
sort of like Java vs Python, with PyTorch similar to Python.
Increased developer adoption
Better support for visualization and input-management tools