Machine Duping
Pwning Deep Learning Systems
CLARENCE CHIO
MLHACKER
@cchio
adapted from Drew Conway
[Venn diagram: HACKING SKILLS, MATH & STATS, and SECURITY; the overlaps are labeled SECURITY RESEARCHER, HAX0R, and DATA SCIENTIST, with DOING IT at the center]
[Montage of deep learning in the wild: ImageNet (Google Inc.), Google DeepMind, UMass facial recognition, Nvidia DRIVE (Nvidia), Invincea malware detection system]
Deep Learning
why would someone choose to use it?
• (Semi)automated feature/representation learning
• One infrastructure for multiple problems (sort of)
• Hierarchical learning: divide the task across multiple layers
• Efficient, easily distributed & parallelized
Definitely not one-size-fits-all
[Diagram, shown on two consecutive slides: 4-5-3 Neural Net Architecture. Input layer → hidden layer → output layer, with weights 𝑊, activation functions 𝒇, and bias units. The logits [17.0, 28.5, 4.50] pass through the softmax function to give the prediction [0.34, 0.57, 0.09]; np.argmax picks predicted class 1 (correct class: 0).]
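A minimal numpy sketch of this last step, using the slide's logits (note: the displayed probabilities [0.34, 0.57, 0.09] are illustrative; a true softmax of these logits is far more peaked, but np.argmax still returns class 1):

```python
import numpy as np

def softmax(logits):
    # shift by the max logit for numerical stability
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

logits = np.array([17.0, 28.5, 4.50])  # final-layer outputs from the slide
probs = softmax(logits)                # probability distribution over the 3 classes
print(np.argmax(probs))                # -> 1, the predicted class (correct class: 0)
```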
Training Deep Neural Networks
Step 1 of 2: Feed Forward
1. Each unit receives the outputs of the neurons in the previous layer (+ a bias signal)
2. Computes a weighted sum of its inputs
3. Passes the weighted sum through a nonlinear activation function
4. For a DNN classifier, send the final layer's output through the softmax function
[Diagram: input → 𝑊1 (+ bias unit b1) → activation function 𝒇 → 𝑊2 (+ bias unit b2) → activation function → logits [17.0, 28.5, 4.50] → softmax function → prediction [0.34, 0.57, 0.09]]
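A minimal sketch of one feed-forward pass through a 4-5-3 network like the one above; the weights are random placeholders and tanh stands in for the activation function 𝒇:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # input (4) -> hidden (5), plus bias
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)  # hidden (5) -> output (3), plus bias

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def feed_forward(x):
    h = np.tanh(W1 @ x + b1)  # weighted sum + bias, then nonlinear activation
    logits = W2 @ h + b2      # final-layer outputs (logits)
    return softmax(logits)    # class probabilities

print(feed_forward(rng.normal(size=4)))
```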
Training Deep Neural Networks
Step 2 of 2: Backpropagation
1. If the model made a wrong prediction, calculate the error
   1. In this case, the correct class is 0, but the model predicted class 1 with 57% confidence; the error is thus 0.57
2. Assign blame: trace backwards to find the units that contributed to this wrong prediction (and how much they contributed to the total error)
   1. Partial differentiation of this error w.r.t. the unit's activation value
3. Penalize those units by decreasing their weights and biases by an amount proportional to their error contribution
4. Do the above efficiently with an optimization algorithm, e.g. stochastic gradient descent (SGD)
[Diagram: the same network, with the total error 0.57 traced backwards from the output through 𝑊2, b2, the activation functions, 𝑊1, and b1]
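A hedged sketch of one backpropagation update for the same toy 4-5-3 network, using the standard cross-entropy loss and a single SGD step (weights and input are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x, y = rng.normal(size=4), 0            # one input sample; correct class is 0

# Step 1: feed forward and calculate the error
h = np.tanh(W1 @ x + b1)
probs = softmax(W2 @ h + b2)
loss = -np.log(probs[y])                # cross-entropy error on the correct class

# Step 2: assign blame via partial differentiation (chain rule)
d_logits = probs - np.eye(3)[y]         # gradient of the loss w.r.t. the logits
d_W2 = np.outer(d_logits, h)
d_pre = (W2.T @ d_logits) * (1 - h**2)  # tanh'(a) = 1 - tanh(a)^2
d_W1 = np.outer(d_pre, x)

# Steps 3-4: penalize the contributing units with one SGD update
lr = 0.1
W2 -= lr * d_W2; b2 -= lr * d_logits
W1 -= lr * d_W1; b1 -= lr * d_pre
```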
DEMO
Beyond Multi Layer Perceptrons
Convolutional Neural Network
Source: iOS Developer Library, vImage Programming Guide
Layered Learning
Lee, 2009, “Convolutional deep belief networks for scalable
unsupervised learning of hierarchical representations."
Beyond Multi Layer Perceptrons
Source: LeNet 5, LeCun et al.
Convolutional Neural Network
Beyond Multi Layer Perceptrons
Recurrent Neural Network
[Diagram: input → neural network → output, with feedback loop(s)]
• Just a DNN with a feedback loop
• Previous time step feeds all intermediate and final values into the next time step (see the sketch below)
• Introduces the concept of “memory” to neural networks
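A minimal sketch of that loop, with placeholder weights: the hidden state from the previous time step is fed back in alongside each new input.

```python
import numpy as np

rng = np.random.default_rng(0)
W_x = rng.normal(size=(8, 4))   # input -> hidden weights
W_h = rng.normal(size=(8, 8))   # hidden -> hidden (the feedback loop)
b = np.zeros(8)

h = np.zeros(8)                           # the network's "memory"
for x_t in rng.normal(size=(5, 4)):       # five time steps of 4-d input
    h = np.tanh(W_x @ x_t + W_h @ h + b)  # previous state feeds the next step
print(h)
```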
Beyond Multi Layer Perceptrons
Recurrent Neural Network
[Diagram: the RNN unrolled across time steps and network depth, consuming the input sequence Y O L O !]
Beyond Multi Layer Perceptrons
Long Short-Term Memory (LSTM) RNN
• To make good predictions, we sometimes need more context
• We need long-term memory capabilities without extending the network’s recursion indefinitely (unscalable)
Image: Disney, Finding Dory
Colah’s Blog, “Understanding LSTM Networks”
Deng et al. “Deep Learning: Methods and Applications”
HOW TO PWN?
Attack Taxonomy
Causative (manipulative training samples):
• Targeted: training samples that move the classifier decision boundary in an intentional direction
• Indiscriminate: training samples that increase FP/FN → renders the classifier unusable

Exploratory (manipulative test samples):
• Targeted: adversarial input crafted to cause an intentional misclassification
• Indiscriminate: n/a

Exploratory manipulation applies also to online-learning models that continuously learn from real-time test data.
Why can we do this?
Statistical learning models don’t learn concepts the same way that we do.
BLIND SPOTS
Adversarial Deep Learning
Intuitions
1. Run input x through the classifier model (or a substitute/approximate model)
2. Based on the model’s prediction, derive a perturbation tensor that maximizes the chances of misclassification:
   1. Traverse the manifold to find blind spots in input space; or
   2. Linear perturbation in the direction of the neural network’s cost function gradient; or
   3. Select only input dimensions with high saliency* to perturb, via the model’s Jacobian matrix
3. Scale the perturbation tensor by some magnitude, resulting in the effective perturbation (δx) to x
   1. Larger perturbation == higher probability of misclassification
   2. Smaller perturbation == less likely to be detected by a human
* saliency: the amount of influence a selected dimension has on the entire model’s output
Adversarial Deep Learning
Intuitions
Szegedy, 2013: Traverse the manifold to find blind spots in the input space
• Adversarial samples == pockets in the manifold
• Difficult to find efficiently by brute force (high-dimensional input)
• To optimize this search, take the gradient of the input w.r.t. the target output class
Adversarial Deep Learning
Intuitions
Goodfellow, 2015: Linear adversarial perturbation
• Developed a linear view of adversarial examples
• Can just take the cost function gradient w.r.t. the sample (x) and the original predicted class (y)
• Easily found by backpropagation
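A minimal sketch of this fast-gradient-sign idea on a softmax-regression model, where the cost gradient w.r.t. the sample has a closed form (for a deep net, the same gradient comes out of backpropagation); W, b, and epsilon are placeholders:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, epsilon):
    """x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
    probs = softmax(W @ x + b)
    grad_x = W.T @ (probs - np.eye(len(b))[y])  # d(cross-entropy)/dx in closed form
    return x + epsilon * np.sign(grad_x)        # small step that maximally increases the cost

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), np.zeros(3)
x, y = rng.normal(size=4), 0
x_adv = fgsm(x, y, W, b, epsilon=0.1)
```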
Adversarial Deep Learning
Intuitions
Papernot, 2015: Saliency map + Jacobian matrix perturbation
• More complex derivation of why the Jacobian of the learned neural network function is used
• Obtained with respect to input features rather than network parameters
• Forward propagation is used instead of backpropagation
• To reduce the probability of human detection, perturb only the dimensions that have the greatest impact on the output (salient dimensions)
Papernot et al., 2015
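A hedged sketch of the saliency computation on a toy one-hidden-layer network: the Jacobian of the class scores w.r.t. the input features is forward-propagated in closed form, and only the most salient dimension is perturbed (weights, target class, and step size are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)

def saliency_map(x, target):
    # Forward-propagate derivatives to get the Jacobian of the class
    # scores w.r.t. the input: dz/dx = W2 @ diag(tanh') @ W1
    h = np.tanh(W1 @ x + b1)
    J = W2 @ (np.diag(1 - h**2) @ W1)  # shape (classes, input dims)
    Jt = J[target]                     # effect of each input dim on the target class
    J_rest = J.sum(axis=0) - Jt        # combined effect on all other classes
    # a dimension is salient if pushing it up raises the target score
    # while lowering the others (a positive saliency map)
    return np.where((Jt > 0) & (J_rest < 0), Jt * np.abs(J_rest), 0.0)

x = rng.normal(size=4)
S = saliency_map(x, target=2)
x_adv = x.copy()
x_adv[np.argmax(S)] += 0.1             # perturb only the most salient dimension
```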
Threat Model:
Adversarial Knowledge
Increasing attacker knowledge:
• Black box
• Training data
• Architecture
• Model hyperparameters, variables, training tools
Deep Neural Network Attacks
[Chart: attack complexity (EASY → DIFFICULT) plotted against adversary knowledge (labeled test samples; oracle; training data; architecture; architecture + training tools + hyperparameters), for attack goals of increasing strength: confidence reduction, untargeted misclassification, targeted misclassification, source/target misclassification. Murphy, 2012; Szegedy, 2014; Nguyen, 2014; Papernot, 2016; Goodfellow, 2016; Xu, 2016]
What can you do with limited knowledge?
• Quite a lot.
• Make good guesses: Infer the methodology from the task
• Image classification: ConvNet
• Speech recognition: LSTM-RNN
• Amazon ML, ML-as-a-service etc.: Shallow feed-forward network
• What if you can’t guess?
STILL CAN PWN?
Black box attack methodology
1. Transferability
Adversarial samples that fool model A have a good chance of fooling a previously unseen model B.
[Decision-boundary plots: SVM (scikit-learn), decision tree, linear classifier (logistic regression), spectral clustering (scikit-learn), feed-forward neural network; Matt's Webcorner, Stanford]
Black box attack methodology
1. Transferability
Papernot et al. “Transferability in Machine Learning:
from Phenomena to Black-Box Attacks using Adversarial Samples”
Black box attack methodology
2. Substitute model
Train a new model by treating the target model’s output as training labels; then generate adversarial samples with the substitute model.
[Diagram: input data → target black box model → black box prediction, used as the training label to backpropagate through and train the substitute model]
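A minimal end-to-end sketch of the loop, assuming only oracle (predict) access to the target: the black box here is a stand-in random forest, the substitute is a logistic regression (so its input gradient is available in closed form), and the adversarial step is FGSM-style:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in for the target: we may only call predict() on it (oracle access)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# 1. Query the oracle on data we choose; its predictions become our labels
#    (assumes the oracle's answers cover both classes)
rng = np.random.default_rng(0)
X_query = rng.normal(size=(300, 10))
y_query = black_box.predict(X_query)

# 2. Train a local substitute on the (input, oracle label) pairs
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

# 3. Craft an adversarial sample against the substitute's gradient;
#    for logistic regression, grad_x of the loss is (p - y) * w in closed form
x = X[0]
p = substitute.predict_proba(x.reshape(1, -1))[0, 1]
y_true = black_box.predict(x.reshape(1, -1))[0]
grad_x = (p - y_true) * substitute.coef_[0]
x_adv = x + 0.5 * np.sign(grad_x)

# Transferability: with luck, the perturbation fools the black box too
print(black_box.predict(np.vstack([x, x_adv])))
```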
Why is this possible?
• Transferability?
• Still an open research problem
• Manifold learning problem
• Blind spots
• Model vs. Reality dimensionality mismatch
• IN GENERAL:
• Is the model not learning anything
at all?
What this means for us
• Deep learning algorithms (Machine Learning in general) are susceptible to
manipulative attacks
• Use with caution in critical deployments
• Don’t make false assumptions about what/how the model learns
• Evaluate a model’s adversarial resilience - not just accuracy/precision/recall
• Spend effort to make models more robust to tampering
Defending the machines
• Distillation
• Train the model twice: the first DNN’s softened output probabilities (derived from its logits) become the training labels for the second DNN
• Train model with adversarial samples (sketched below)
• i.e. ironing out imperfect knowledge learnt in the model
• Other miscellaneous tweaks
• Special regularization/loss-function methods (simulating adversarial content during training)
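A hedged sketch of training with adversarial samples on a toy softmax-regression model: each SGD step also trains on an FGSM-perturbed twin of the clean sample (data, labels, and epsilon are placeholders):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy, linearly separable labels
W, b = rng.normal(size=(2, 4)), np.zeros(2)
lr, epsilon = 0.1, 0.1

for x_i, y_i in zip(X, y):
    target = np.eye(2)[y_i]
    # craft an adversarial twin of the clean sample (FGSM, as sketched earlier)
    grad_x = W.T @ (softmax(W @ x_i + b) - target)
    x_adv = x_i + epsilon * np.sign(grad_x)
    # one SGD step on the clean sample AND its adversarial twin
    for x_t in (x_i, x_adv):
        d_logits = softmax(W @ x_t + b) - target
        W -= lr * np.outer(d_logits, x_t)
        b -= lr * d_logits
```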
DEEP-PWNING: “metasploit for machine learning”
WHY DEEP-PWNING?
• lol why not
• “Penetration testing” of statistical/machine learning systems
• Train models with adversarial samples for increased robustness
PLEASE PLAY WITH IT &
CONTRIBUTE!
Deep Learning and Privacy
• Deep learning also sees challenges in other areas relating to security &
privacy
• Adversary can reconstruct training samples from a trained black box
DNN model (Fredrikson, 2015)
• Can we precisely control the learning objective of a DNN model?
• Can we train a DNN model without the training agent having complete
access to all training data? (Shokri, 2015)
WHY IS THIS IMPORTANT?
WHY DEEP-PWNING?
• MORE CRITICAL SYSTEMS RELY ON MACHINE LEARNING → MORE IMPORTANCE
ON ENSURING THEIR ROBUSTNESS
• WE NEED PEOPLE WITH BOTH SECURITY AND STATISTICAL SKILL SETS TO
DEVELOP ROBUST SYSTEMS AND EVALUATE NEW INFRASTRUCTURE
LEARN IT OR BECOME IRRELEVANT
@cchio
MLHACKER