“AI is the new electricity,” proclaims Andrew Ng, co-founder of Google Brain. Just as we need to know how to safely harness electricity, we also need to know how to securely employ AI to power our businesses. In some scenarios, the security of AI systems can impact human safety. On the flip side, AI can also be misused by cyber-adversaries, so we need to understand how to counter them.
This talk will provide food for thought in 3 areas:
Security of AI systems
Use of AI in cybersecurity
Malicious use of AI
2. Outline
• Cybersecurity – a quick recap
• Overview of AI
• Security of AI systems
• AI-aided Attacks/Maliciousness
• Use of AI in CyberSecurity
• Demo
#ISSLearningFest
3. Cybersecurity – a quick recap
The CIA triad – Confidentiality, Integrity, Availability – together with the related concerns of Privacy and Safety:
• CONFIDENTIALITY – Ensuring that information is accessible only to those authorised to have access.
• INTEGRITY – Safeguarding the correctness and completeness of information and processing methods.
• AVAILABILITY – Ensuring that authorised users have access to information and associated assets when required, in whatever form required.
5. Artificial Intelligence (AI)
• Artificial General Intelligence
• Do anything a human can do
• Artificial Narrow Intelligence
• Computer Vision (e.g. object recognition as in face recognition)
• Speech (e.g. smart speaker)
• Natural Language Processing (e.g. sentiment analysis, machine translation)
• Autonomous vehicles (e.g. self-driving cars)
6. Example: Social Distancing Detector
https://landing.ai/landing-ai-creates-an-ai-tool-to-help-customers-monitor-social-distancing-in-the-workplace/
8. Example
• 20 lawyers vs LawGeex AI
• Task: review 5 NDAs (3,213 clauses) in 4 hours.
• Result:

                                 AI           Lawyers
  Accuracy                       94%          Avg 85%
  Time taken to review all NDAs  26 seconds   Avg 92 minutes

Source: https://blog.lawgeex.com/ai-more-accurate-than-lawyers/
9. Singapore’s National AI Strategy
Ref: National Artificial Intelligence Strategy - Advancing our Smart Nation Journey, Summary
10. AI, Machine Learning, Deep Learning
• Nested fields: AI ⊃ Machine Learning ⊃ Deep Learning
• Machine Learning: algorithms with the ability to learn without being explicitly programmed.
• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning
• Deep Learning: Deep Neural Networks (DNN)
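Supervised learning, the most common of the three paradigms above, can be sketched in a few lines. The toy data, model, and hyperparameters below are illustrative, not from the talk: a logistic-regression classifier learns a decision boundary from labelled examples via gradient descent.

```python
# Minimal supervised learning sketch: logistic regression by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data: two Gaussian blobs, classes 0 and 1.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    w -= lr * X.T @ (p - y) / len(y)     # gradient of cross-entropy loss w.r.t. w
    b -= lr * np.mean(p - y)             # ... and w.r.t. b

preds = (X @ w + b > 0).astype(int)
accuracy = np.mean(preds == y)
```

The "learning" here is nothing more than fitting parameters to labelled data; deep learning replaces the linear model with a many-layered network but keeps the same loop.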
13. Security of AI Systems
Exploring the additional attack surface, if any, resulting from utilizing AI
14. Threats (illustrative)
ML pipeline: Prepare Training Data → Train the Model → Trained Model (Input → Output)
• At training time:
• Training Set Poisoning
• DNN backdoors
• Trojaned DNN
• Privacy Concerns
• At inference time:
• Adversarial Examples
• Physical Adversarial Examples
• Reprogramming of Neural Networks
• Model Stealing / Model Extraction
• Model Inversion
• Membership Inference Attack
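Training-set poisoning, the first threat above, can be illustrated with a toy sketch. The data, the nearest-centroid classifier, and the poisoning strategy below are all hypothetical: an attacker who can inject mislabelled points into the training data degrades the resulting model for every user.

```python
# Toy training-set poisoning: injected mislabelled points drag a class
# centroid across the decision boundary.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    # "Training" = compute the mean point of each class.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(X, c0, c1):
    # Assign each point to the nearer class centroid.
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

clean_acc = np.mean(predict(X, *train_centroids(X, y)) == y)

# Poison: attacker injects 60 fabricated points deep in class-1 territory,
# labelled as class 0, dragging the class-0 centroid past the class-1 one.
X_poison = np.vstack([X, np.full((60, 2), 12.0)])
y_poison = np.concatenate([y, np.zeros(60, dtype=int)])
poisoned_acc = np.mean(predict(X, *train_centroids(X_poison, y_poison)) == y)
```

A small fraction of attacker-controlled training data is enough to collapse accuracy, which is why keeping training data pristine appears later in this talk as a key governance question.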
15. Adversarial Example (Image Classification)
• Fast Gradient Sign Method (FGSM): clean image + adversarial perturbation = adversarial example
Source: “Explaining and Harnessing Adversarial Examples”, Ian J. Goodfellow, et al., ICLR 2015
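FGSM itself is a one-line perturbation: add epsilon times the sign of the loss gradient with respect to the input. A minimal sketch, assuming a toy logistic-regression model rather than an image classifier (the weights and input below are made up for illustration):

```python
# FGSM sketch: x_adv = x + epsilon * sign(∇x loss), on a toy linear model.
import numpy as np

# A hypothetical trained linear model: score = w·x + b, class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(x @ w + b > 0)

x = np.array([0.9, 0.1, 0.4])   # clean input, classified as class 1
y = 1

# Gradient of the cross-entropy loss with respect to the input x.
p = 1 / (1 + np.exp(-(x @ w + b)))
grad_x = (p - y) * w

epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)   # per-feature change bounded by epsilon
```

The perturbation is small and bounded (at most epsilon per feature), yet it is aimed exactly where the model is most sensitive, which is why the change can be imperceptible in an image while still flipping the prediction.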
17. Adversarial Examples …
• … affect the integrity of the ML model
• Could lead to various cybersecurity risks and corresponding business impact
such as …
18. Impersonation
• Adversarial Example Attack against a Face Recognition System (FRS), which could be part of an access control or surveillance system, via an “adversarial” eyeglass frame, to impersonate a target.
19. Impersonation
Impersonation of target (cont’d)
Source: “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition”, Mahmood Sharif, et al., Oct 2016
20. Impersonation, Dodging
• Adversarial perturbation by projecting infrared dots onto the attacker’s face to induce misclassification by a Face Recognition System.
• Impersonation
• Dodging
Source:
21. Safety Issues
• Autonomous vehicle may fail to “see” the stop sign because the ML-based
model misclassifies the adversarially perturbed stop sign as a speed limit
sign.
23. Transferability of Adversarial Examples
• Adversarial examples that affect one model often affect another model
trained to perform the same task, even if the 2 models have
• Different architectures
• Different training data
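Transferability can be sketched on a toy task (illustrative code, not from the talk): train two models on different samples of the same data, craft an FGSM example against model A only, and check whether it also fools model B.

```python
# Transferability sketch: an adversarial example crafted against model A
# also fools an independently trained model B.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_logreg(X, y, lr=0.1, steps=300):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Two models for the same task, trained on different data samples.
wA, bA = train_logreg(*make_data(100))
wB, bB = train_logreg(*make_data(100))

def predict(x, w, b):
    return int(x @ w + b > 0)

x, y = np.array([2.0, 2.0]), 1           # a clean class-1 input
p = 1 / (1 + np.exp(-(x @ wA + bA)))
grad_x = (p - y) * wA                    # gradient uses model A ONLY

x_adv = x + 3.0 * np.sign(grad_x)        # FGSM against model A
```

Because both models learned similar decision boundaries for the same task, an attack needs no access to the victim model: craft against a substitute, then transfer.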
25. “Security will be one of the biggest challenges in deploying AI”
— Dawn Song, Professor, Computer Science Division, University of California, Berkeley
26. New Challenges
• “Traditional software attack vectors are still critical to address, but they do
not provide sufficient coverage in the AI/ML threat landscape.”
• “The tech industry must avoid fighting next-gen issues with last-gen solutions
by building new frameworks and adopting new approaches which address
gaps in the design and operation of AI/ML-based services.”
Source: Securing the Future of Artificial Intelligence and Machine Learning at Microsoft
27. • Meanwhile, the following slide provides additional food for thought in this
area…
28. Protecting AI Initiatives
• How are we protecting our AI-based products, tools, and services?
• How do we keep our training data pristine and protect against biased inputs and poisoning?
• How do we protect the algorithms (or their implementation)?
• Do we have control procedures that stop abnormal events from happening and a Plan B in case we
notice that our AI programs are behaving abnormally?
• Do we have the technical and human monitoring capabilities to detect if our AI has been tampered
with?
• Have we made conscious decisions about who (or what) can decide and control which capabilities? Did
we assign AI systems an appropriate responsibility matrix entry? Do we constrain AI to decision support
or expert systems, or do we let AI programs make decisions on their own (and if so, which ones)?
• Do we have the appropriate governance policies and an agreed code of conduct that specify which of
our processes or activities are off-limits for AI for security reasons?
• When using AI in conjunction with decisions on cyber-physical systems, do we have appropriate ethical,
process, technical, and legal safeguards in place? Do we have compensating controls? How do we test
them?
• Have we aligned our cybersecurity organization, processes, policies, and technology to include AI, to
protect AI, and to protect us from AI malfunctions?
Source: https://www.bcg.com/en-sea/publications/2018/artificial-intelligence-threat-cybersecurity-solution.aspx
30. Examples
• Impersonation
• Speech synthesis systems that learn to imitate individuals’ voices
• Deepfake videos
• Generative Adversarial Network (GAN)-based tools
31. • Criminals used artificial intelligence-based software to impersonate a chief
executive’s voice and demand a fraudulent transfer of €220,000 ($243,000) in
March in what cybercrime experts described as an unusual case of artificial
intelligence being used in hacking.
• The CEO of a U.K.-based energy firm thought he was speaking on the phone
with his boss, the chief executive of the firm’s German parent company, who
asked him to send the funds to a Hungarian supplier. The caller said the request
was urgent, directing the executive to pay within an hour, according to the
company’s insurance firm, Euler Hermes Group SA.
32. Deepfake Videos
• Did Obama really say this?
Ref: https://www.youtube.com/watch?v=cQ54GDm1eL0
33. GAN-based tools - Examples
• MalGAN
• Generates malware that can bypass ML-based malware detectors
• PassGAN
• Autonomously learns the distribution of real passwords from actual password leaks and generates high-quality password guesses
35. Use of AI in Cybersecurity – some examples
• Malware detection
and classification
• Spam identification
• Tier 1 analyst automation
• User and Entity Behaviour Analytics
(UEBA)
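Spam identification, one of the examples above, is the classic illustration of ML in security: a toy Naive Bayes classifier over word counts. The tiny training set below is made up; production filters use far richer features.

```python
# Toy Naive Bayes spam classifier over word counts, with add-one smoothing.
import math
from collections import Counter

spam = ["win cash now", "free prize claim now", "cash prize win"]
ham = ["meeting agenda attached", "lunch at noon", "project status meeting"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(msg, counts, class_docs, all_docs):
    # log P(class) + sum over words of log P(word | class)
    lp = math.log(class_docs / all_docs)
    n = sum(counts.values())
    for w in msg.split():
        lp += math.log((counts[w] + 1) / (n + len(vocab)))
    return lp

def classify(msg):
    s = log_prob(msg, spam_counts, len(spam), len(spam) + len(ham))
    h = log_prob(msg, ham_counts, len(ham), len(spam) + len(ham))
    return "spam" if s > h else "ham"
```

The same statistical approach, trained on real mail corpora, underpins production spam filters; and exactly because the model is learned from data, it inherits the evasion and poisoning risks discussed earlier.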
36. But it may not be robust…
• DefCon AI Village (2019): Machine Learning Static Evasion Competition
• Use a hybrid approach:
• AI/ML for the unknown
• Verify with tried-and-tested classical techniques
37. Summary
• Security will be one of the biggest challenges in deploying AI.
• New challenges require new approaches.
• Malicious use of AI and AI-powered attacks must be considered as part
of an organization’s cybersecurity risk assessment.
• Consider ML-based security solutions as an augmentation (not
replacement) of your traditional security solutions and security staff.