
Mozfest 2018 session slides: Let's fool modern A.I. systems with stickers.

The goal of the session was to demystify Machine Learning for the participants and to show them a real Machine Learning system in action. A secondary goal was to show that Machine Learning is itself just another tool, susceptible to adversarial attacks. These attacks can have huge implications, especially in a world with self-driving cars and other automation. The session was designed to be highly collaborative and audience-driven, and could be adjusted to suit the participants' familiarity with machine learning and coding.

Published in: Technology
  1. Let’s fool modern A.I. Systems with Stickers. Anant Jain, Co-founder, Commonlounge.com. https://anantja.in · @anant90
  2. Download “Demitasse”: bit.ly/image-recog. Download the VGG-CNN-F (Binary Compression) model data (106 MB).
  3. Activity: Let’s list all the apps where you see ML systems being used. Group them by the type of task they try to accomplish.
  4. Discussion: What exactly is “learnt” in Machine Learning?
  5. Source: http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html
  6. Discussion: What exactly is “learnt” in Machine Learning? Feed-forward neural network. 1. Neural Network
  7. Discussion: What exactly is “learnt” in Machine Learning? Feed-forward neural network. 1. Neural Network 2. Weights
  8. Discussion: What exactly is “learnt” in Machine Learning? Feed-forward neural network. 1. Neural Network 2. Weights 3. Cost Function
  9. Discussion: What exactly is “learnt” in Machine Learning? Feed-forward neural network. 1. Neural Network 2. Weights 3. Cost Function 4. Gradient Descent
  10. Discussion: What exactly is “learnt” in Machine Learning? Feed-forward neural network. 1. Neural Network 2. Weights 3. Cost Function 4. Gradient Descent 5. Back Propagation
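The five concepts built up on these slides can be illustrated end-to-end on a toy problem. This sketch (not from the slides; all values illustrative) "learns" a single weight by gradient descent on a squared-error cost:

```python
# Toy illustration: a single weight is "learnt" by gradient descent
# on a squared-error cost function. The gradient is computed by hand
# here; back-propagation is the general recipe for computing the same
# gradients automatically in deep networks.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

w = 0.0      # 2. the weight to be learnt
lr = 0.05    # step size for gradient descent

for step in range(200):
    # 3. cost function: mean squared error over the data
    grad = 0.0
    for x, y in data:
        pred = w * x                 # 1. a one-neuron "network"
        grad += 2 * (pred - y) * x   # d(cost)/dw for this sample
    grad /= len(data)
    w -= lr * grad                   # 4. gradient descent update

print(round(w, 3))  # converges close to 2.0, the true slope
```

What is "learnt", then, is nothing mystical: just the weight values that minimize the cost function on the training data.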
  11. Source: http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html
  12. Discussion: What exactly is “learnt” in Machine Learning? How do we break it?
  13. Source: http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html
  14. Source: http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html
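The cleverhans post linked above describes the fast gradient sign method (FGSM): nudge every input feature a small step epsilon in the direction that increases the model's loss. A minimal sketch on a hand-rolled logistic classifier (the model and all names here are illustrative, not from the slides):

```python
import math

# FGSM sketch: perturb each input feature by epsilon in the sign of
# the loss gradient w.r.t. the INPUT (not the weights), which pushes
# the model's score toward the wrong class.

weights = [1.5, -2.0, 0.5]   # a fixed, already-"trained" toy model

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))   # probability of class 1

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, label, epsilon=0.3):
    # for logistic loss, d(loss)/d(x_i) = (p - label) * w_i
    p = predict(x)
    return [xi + epsilon * sign((p - label) * w)
            for xi, w in zip(x, weights)]

x = [1.0, 0.2, 0.3]          # clean input, true label 1
adv = fgsm(x, label=1)
print(predict(x), predict(adv))  # confidence drops on the adversarial input
```

The same recipe scales to image classifiers: the per-pixel perturbation is tiny, but its cumulative effect on the output can flip the prediction.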
  15. Demo
  16. Discussion: What are the implications of these attacks?
  17. Discussion: What are the implications of these attacks? • Self-Driving Cars: a patch may make a car think that a Stop Sign is a Yield Sign. • Voice-based personal assistants (e.g. Alexa): transmit sounds that sound like noise to humans but carry specific commands (video). • eBay: list livestock and other banned items past automated filters.
  18. Discussion: How do you defend ML systems from these attacks?
  19. Discussion: How do you defend ML systems from these attacks? • Adversarial training seeks to improve the generalization of a model when presented with adversarial examples at test time by proactively generating adversarial examples as part of the training procedure.
  20. Discussion: How do you defend ML systems from these attacks? • Adversarial training seeks to improve the generalization of a model when presented with adversarial examples at test time by proactively generating adversarial examples as part of the training procedure. • Defensive distillation smooths the model’s decision surface in the adversarial directions exploited by the adversary.
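The first defense above can be sketched as a training loop that mixes adversarial copies of each example into the updates. This is an illustrative sketch (not from the slides): a toy logistic model trained on each clean example plus an FGSM-style perturbed copy of it:

```python
import math, random

# Adversarial training sketch: at every step, take a gradient step on
# the clean example AND on a perturbed copy, so the model learns to
# classify correctly inside an epsilon-ball around the training data.

random.seed(0)
weights = [0.0, 0.0]
lr, epsilon = 0.1, 0.1

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def perturb(x, label):
    # FGSM-style: move each feature epsilon in the loss-increasing direction
    p = predict(x)
    return [xi + epsilon * sign((p - label) * w)
            for xi, w in zip(x, weights)]

data = [([1.0, 0.5], 1), ([-1.0, -0.5], 0)]

for _ in range(500):
    x, y = random.choice(data)
    for example in (x, perturb(x, y)):        # clean + adversarial
        p = predict(example)
        for i, xi in enumerate(example):
            weights[i] -= lr * (p - y) * xi   # logistic-loss gradient step

print([round(w, 2) for w in weights])
```

Note the arms-race caveat: training against one attack (here, FGSM) does not guarantee robustness against stronger or different attacks.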
  21. Discussion: Privacy issues in ML (and how the two can be unexpected allies)
  22. Discussion: Privacy issues in ML (and how the two can be unexpected allies). • Lack of fairness and transparency when learning algorithms process the training data.
  23. Discussion: Privacy issues in ML (and how the two can be unexpected allies). • Lack of fairness and transparency when learning algorithms process the training data. • Training data leakage: how do you make sure that ML systems do not memorize sensitive information about the training set, such as the specific medical histories of individual patients? → Differential Privacy
  24. PATE (Private Aggregation of Teacher Ensembles)
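PATE's central mechanism: several "teacher" models, each trained on a disjoint shard of the sensitive data, vote on a label, and Laplace noise added to the vote counts makes the released answer differentially private; a "student" model is then trained on these noisy labels. A sketch of the noisy aggregation step (all names illustrative, not from the slides):

```python
import random
from collections import Counter

random.seed(42)

def noisy_aggregate(teacher_votes, epsilon=1.0):
    """PATE-style aggregation: count the teachers' votes, add
    Laplace(1/epsilon) noise to each count, return the noisy winner.
    (The difference of two Exp(epsilon) draws is Laplace-distributed.)"""
    counts = Counter(teacher_votes)
    noisy = {label: n + random.expovariate(epsilon) - random.expovariate(epsilon)
             for label, n in counts.items()}
    return max(noisy, key=noisy.get)

# 10 teachers, trained on disjoint data shards, vote on one input:
votes = ["cat"] * 8 + ["dog"] * 2
print(noisy_aggregate(votes))  # a strong consensus usually survives the noise
```

This is where privacy and adversarial robustness become "unexpected allies": no single patient's record can swing more than one teacher's vote, so the noisy consensus reveals little about any individual.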
  25. Summary: Machine Learning is itself just another tool, susceptible to adversarial attacks. These can have huge implications, especially in a world with self-driving cars and other automation.
  26. Bonus: Generative Adversarial Networks (GANs)
  27. Generative Adversarial Networks (GANs)
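The "adversarial" in GANs refers to a two-player game (the standard formulation from the original GAN work, not spelled out on the slides): a generator G maps noise z to fake samples, while a discriminator D learns to tell real samples from fakes, giving the minimax objective

    min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 − D(G(z))) ]

Training alternates gradient steps: ascent on D's parameters, descent on G's, until the generator's samples become hard to distinguish from real data.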
  28. Bonus: Applications of GANs
  29. Bonus: Applications of GANs. • Creativity suites (photo, video editing): interactive image editing (Adobe Research), fashion, digital art (Deep Dream 2.0) • 3D objects: shape estimation (from 2D images), shape manipulation • Medical (Insilico Medicine): drug discovery, molecule development • Games / simulation: generating realistic environments (buildings, graphics, etc.), including inferring physical laws and the relation of objects to one another • Robotics: augmenting real-world training with virtual training
