
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

Slides from the talk given by Battista Biggio at the 2017 ICCV Workshop ViPAR

Published in: Engineering


  1. Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid. 2017 ICCV Workshop ViPAR, Venice, Oct. 23, 2017. Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli (battista.biggio@diee.unica.it, @biggiobattista). Pattern Recognition and Applications Lab, Department of Electrical and Electronic Engineering, University of Cagliari, Italy.
  2. The iCub Humanoid. The iCub is a humanoid robot developed at the Italian Institute of Technology as part of the EU project RobotCub and adopted by more than 20 laboratories worldwide. It has 53 motors that move the head, arms and hands, waist, and legs. It can see and hear, and it has a sense of proprioception (body configuration) and movement (via accelerometers and gyroscopes) [http://www.icub.org]. The object recognition system of the iCub uses visual features extracted with CNN models trained on the ImageNet dataset [G. Pasquale et al., MLIS 2015].
  3. The iCub Robot-Vision System.
  4. The iCubWorld28 Dataset [http://old.iit.it/projects/data-sets].
  5. Crafting the Adversarial Examples. • Key idea: shift the attack sample towards the decision boundary, under a maximum input perturbation (Euclidean distance). • Multiclass boundaries are obtained as the difference between the discriminant functions of the competing classes (e.g., in one-vs-all multiclass classification), as formalized below. (Figure: discriminant functions f1, f2, f3 and the pairwise boundary f1 - f3.)
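The two bullets above can be condensed into a single constrained optimization problem. The notation below is a reconstruction (the slide itself only shows the plots): x' is the perturbed sample, x_0 the source image, Omega the attack objective defined on the next two slides, and d_max the perturbation budget; the boundary between classes i and j is where f_i(x) - f_j(x) = 0, hence the f1 - f3 difference in the figure.

```latex
\[
\min_{x'} \; \Omega(x')
\qquad \text{s.t.} \quad \lVert x' - x_0 \rVert_2 \,\le\, d_{\max}
\]
```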
  6. Error-generic Evasion. • k is the true class (blue); l is the competing (closest) class in feature space (red). • The attack minimizes the objective below so that the sample is misclassified as the closest competing class, which could be any class: indiscriminate evasion. (Figure: decision regions with the attack path.)
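Spelled out, a plausible form of the error-generic objective consistent with the slide's wording (reconstructed notation, with f_i the one-vs-all discriminant of class i and k the true class):

```latex
\[
\Omega(x') \;=\; f_k(x') \,-\, \max_{l \neq k} f_l(x')
\]
```

Once this quantity drops below zero, some competing class outscores the true one and the sample is misclassified.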
  7. Error-specific Evasion. • k is the target class (green); l is the competing class (initially, the blue class). • The attack maximizes the objective below so that the sample is misclassified as the chosen target class: targeted evasion. (Figure: decision regions with the attack path towards the target class.)
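For the error-specific case, the same quantity is maximized, with k now denoting the attacker-chosen target class (again a reconstruction, not copied from the deck):

```latex
\[
\max_{x'} \; f_k(x') \,-\, \max_{l \neq k} f_l(x')
\qquad \text{s.t.} \quad \lVert x' - x_0 \rVert_2 \,\le\, d_{\max}
\]
```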
  8. Gradient-based Evasion Attacks. • The optimization problem is solved with a projected gradient-based algorithm; a sketch follows the slide. • The required gradient is obtained via the chain rule, ∇f_i(x) = (∂f_i(z)/∂z)(∂z/∂x), where z is the deep feature representation of the input x. (Figure: the CNN mapping the input x to features z and discriminant outputs f_1, f_2, ..., f_i, ..., f_c.)
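A minimal sketch of such a projected gradient attack, assuming a helper grad_omega(x) that returns the gradient of the objective back-propagated through the network as above (hypothetical names; this is not the authors' released code):

```python
import numpy as np

def evade(x0, grad_omega, d_max, step=0.05, n_iter=100, lb=0.0, ub=1.0):
    """Minimize the attack objective Omega(x) by projected gradient descent."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x = x - step * grad_omega(x)         # gradient step on Omega
        x = np.clip(x, lb, ub)               # keep pixel values in range
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > d_max:                     # project back onto the ball
            x = x0 + delta * (d_max / norm)  # ||x - x0||_2 <= d_max
    return x
```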
  9. Adversarial Examples against the iCub. An adversarial example from the class laundry-detergent, modified with our algorithm to be misclassified as cup.
  10. The ‘Sticker’ Attack against iCub. Adversarial example generated by manipulating only a specific image region, to simulate a sticker that could be applied to the real-world object; the resulting image is classified as cup. A sketch of this masked variant follows the slide.
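The region constraint can be implemented by masking the gradient so that only the sticker pixels are ever updated. A sketch under the same assumptions as before (mask is a hypothetical 0/1 array of the same shape as the image):

```python
import numpy as np

def evade_sticker(x0, grad_omega, mask, d_max, step=0.05, n_iter=100):
    """Variant of evade() where only pixels with mask == 1 may change,
    simulating a sticker applied to one region of the object."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x = x - step * mask * grad_omega(x)  # update the sticker region only
        x = np.clip(x, 0.0, 1.0)
        delta = (x - x0) * mask              # perturbation confined to the mask
        norm = np.linalg.norm(delta)
        if norm > d_max:
            delta *= d_max / norm            # project onto the L2 budget
        x = x0 + delta
    return x
```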
  11. Why is ML Vulnerable to Evasion? • Attack samples far from the training data are nonetheless assigned to ‘legitimate’ classes. • Rejecting such blind-spot evasion points should improve security; a sketch of a reject option follows the slide. (Figures: decision regions of an SVM-RBF with no reject option and with a higher rejection rate.)
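One simple way to realize the reject option, assuming a multiclass one-vs-all SVM with RBF kernel (scikit-learn names are used for illustration; the slides do not prescribe an implementation): reject any sample whose best discriminant score stays below a threshold.

```python
import numpy as np
from sklearn.svm import SVC

def predict_with_reject(clf, X, threshold=0.0):
    """Pick the class with the highest one-vs-all score, but return -1
    (reject) when no score exceeds the threshold, i.e., for blind-spot
    points lying far from the training data."""
    scores = clf.decision_function(X)   # shape (n_samples, n_classes) for ovr
    labels = np.argmax(scores, axis=1)
    labels[scores.max(axis=1) < threshold] = -1
    return labels

# Hypothetical usage; raising the threshold raises the rejection rate:
# clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_tr, y_tr)
# y_pred = predict_with_reject(clf, X_te)
```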
  12. Countering Adversarial Examples. (Figure: security evaluation curves against an increasing maximum input perturbation (Euclidean distance), down to visually-indistinguishable perturbations; error-specific evasion is shown, with similar results for error-generic attacks.)
  13. Conclusions and Future Work. • Adversarial examples can be crafted against the iCub. • A countermeasure based on rejecting blind-spot evasion attacks. • Main open issue: instability of deep features, i.e., small changes in input space (pixels) aligned with the gradient direction correspond to large changes in deep feature space.
  14. https://sec-ml.pluribus-one.it/
