2. Overview
• Introduction to Security Development Lifecycle
• SDL Threat Modeling Process
• SDL Threat Modeling Tool
3. History of MS Security Development Lifecycle process
4.
• Education: administer and track security training
• Accountability: Incident Response (MSRC); establish release criteria and sign-off as part of the FSR
• Process: guide product teams to meet SDL requirements
• Ongoing process improvements
SDL conforms to ISO/IEC 27034-1:2011
5. SDL in Agile development processes
Microsoft Confidential
1. Core Security Training
2. Establish Security Requirements
3. Create Quality Gates / Bug Bars
4. Perform Security and Privacy Risk Assessments
5. Establish Design Requirements
6. Perform Attack Surface Analysis / Reduction
7. Use Threat Modeling
8. Use Approved Tools
9. Deprecate Unsafe Functions
10. Perform Static Analysis
11. Perform Dynamic Analysis
12. Perform Fuzz Testing
13. Conduct Attack Surface Review
14. Create an Incident Response Plan
15. Conduct Final Security Review
16. Certify Release and Archive
17. Execute Incident Response Plan
Practices are grouped as Every-Sprint Practices, Bucket Practices, and One-Time Practices.
7. SDL Threat Modeling Overview
SDL Threat Modeling: A process to understand
security threats to a system, determine risks from
those threats, and establish appropriate
mitigations.
Threat modelling works to identify, communicate,
and understand threats and mitigations within the
context of protecting something of value.
owasp.org
8. What?
• Description / design / model of what you’re worried
about
• List of assumptions that can be checked or
challenged in the future as the threat landscape
changes
• List of potential threats to the system
• List of actions to be taken for each threat
• Way of validating the model and threats, and
verification of success of actions taken
9. Who Performs / Drives Threat Modeling?
• The SDL Threat Modeling process can be
performed by:
• Security experts
• Non-security experts
• The threat modeling process should be driven
by:
• Application designers; however, developers and
testers should be involved
10. Who? Roles
• People who are building the System
• People who are/will be testing the System
• People who understand the Business Requirements
• People who are tracking and managing progress
Roles by session type, across Architect, Program Manager, Software Test, Penetration Test, Developer, and Security Consultant (O = Own, P = Participate, V = Validate):
• Requirements: O O V P V
• Model: P P O V O
• Threat Enumeration: P P V O V
• Mitigations: P P O V O
• Validate: O O P P P V
11. Who? Customers
Customers for threat models:
• Your team
• Other feature/product teams
• Customers, via user education
• ‘External’ QA resources like pen testers
• Security Advisors
12. Why?
• Produce software that’s secure by design
Improve designs the same way we’ve improved code
• Document and discuss security in a structured way
• Bring Security and Development together
• Identify and document threats and compliance requirements
and evaluate their risks
• Find and document mitigation
• Balance risks, controls, and usability
• Ensure business requirements (or goals) are adequately
protected
• Because attackers think differently
Creator blindness/new perspective
13. When to Threat Model?
• Best performed during the application design phase
• Easiest to make application changes
• Less costly than adding and testing mitigations after the code has been implemented
• Motto: The sooner the better, but never too late!
15. The Threat Modeling Process
Gather Requirements
• Identify what the system should do
• Remember to include security requirements as well!
Model System
• Define how information flows, who interacts with it, and where it is stored
Enumerate Threats
• Identify the threats to the system and treat them as BUGS!
• Prioritize threats
Identify Mitigations
• Identify strategies to reduce the probability and/or impact of exploitation of the vulnerabilities
Validate
• Check that the model, the enumerated threats, and the mitigations are complete and consistent
16. Step 1: Model
• Data flow diagrams (DFDs)
• Include processes, data stores, data flows
• Most attacks based on data flowing through an
application or system
• Trust boundaries
• Update diagrams as product changes
• Possible multiple layers/levels of details
Step Objective: To model an application design
as a data flow diagram to drive threat analysis
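A DFD can also be captured as plain data to drive tooling. A minimal sketch (the element and flow names are hypothetical, not from the slides):

```python
from dataclasses import dataclass


@dataclass
class Element:
    """A DFD element: process, data store, or external entity."""
    name: str
    kind: str


@dataclass
class DataFlow:
    """A directed flow of data between two DFD elements."""
    source: Element
    sink: Element
    crosses_trust_boundary: bool = False


# Hypothetical fragment of a web application's DFD:
browser = Element("Browser", "external entity")
web = Element("Web Server", "process")
db = Element("Database", "data store")

flows = [
    # Browser-to-server traffic crosses a trust boundary (the network).
    DataFlow(browser, web, crosses_trust_boundary=True),
    DataFlow(web, db),
]
```

Each flow that crosses a trust boundary is a natural starting point for threat enumeration.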
17. Step 1: Model
"Essentially, all models are wrong, but
some are useful."
--- Box, George E. P.; Norman R. Draper (1987). Empirical Model-Building and Response
Surfaces, p. 424, Wiley. ISBN 0471810339.
18. Data Flow Diagram (DFD) Elements
Element | Represented By | Description
External Entity | Rectangle | Any entity not within the control of the application, such as people and external systems
Process | Circle | Code, such as native code executables and .NET assemblies
Data Store | Two parallel lines | Data at rest, such as registry keys and databases
Data Flow | Arrow | How data flows between elements, such as function calls and network data
19. Additional Element: Trust Boundaries
Element | Represented By | Description
Trust Boundary | Dashed line | A point within an application where data flows from one privilege level to another, such as network sockets, and external entities and processes with different trust levels
Examples:
• Machine boundaries, privilege boundaries, integrity boundaries are
examples of trust boundaries
• Threads in a native process are often inside a trust boundary, because they
share the same privileges, identifiers and access
• Processes talking across a network always have a trust boundary
• Trusted code reading from untrusted code
• Validate everything for specific uses
• Trusted code writing to untrusted code
• Make sure your errors don’t give away too much
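The "validate everything for specific uses" advice can be sketched as an allow-list check at the trust boundary. A minimal sketch, assuming a hypothetical username field (the format rule is an illustrative assumption):

```python
import re


def validate_username(raw: str) -> str:
    """Validate untrusted input crossing a trust boundary for its
    specific use (here: a username), rather than generically
    'sanitizing' it."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    # Allow-list: only word characters, 3-32 long. Anything else is
    # rejected, and the error deliberately reveals nothing internal.
    if not re.fullmatch(r"[A-Za-z0-9_]{3,32}", raw):
        raise ValueError("invalid username")
    return raw
```

For example, `validate_username("billg")` passes through, while input containing spaces or shell metacharacters raises `ValueError` before it reaches trusted code.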
20.
• External Entity: people, other systems, Microsoft.com
• Data Flow: function calls, network traffic, Remote Procedure Calls (RPC)
• Process: DLLs, EXEs, services, web services; PHP, Java, Python, .NET, etc.
• Data Store: database, file, registry, shared memory, queue/stack
• Trust Boundary: process boundary, file system
21. Step 2: Threat Enumeration
• Experts: Brainstorming and other informal methods
• Experts and Non-Experts: STRIDE threat types
• Based on Microsoft Security Response Center (MSRC)
issues and Common Vulnerabilities and Exposures (CVE)
(see http://cve.mitre.org for more information)
• Other methods like Kill Chains, CAPEC, P.A.S.T.A., Trike,
VAST
Step Objective: To identify threats for each data
flow diagram element in the threat model
22. STRIDE Threat Types
Desired Property | Threat | Definition
Authentication | Spoofing | Impersonating something or someone else
Integrity | Tampering | Modifying code or data without authorization
Non-repudiation | Repudiation | The ability to claim to have not performed some action against an application
Confidentiality | Information Disclosure | The exposure of information to unauthorized users
Availability | Denial of Service | The ability to deny or degrade a service to legitimate users
Authorization | Elevation of Privilege | The ability of a user to elevate their privileges within an application without authorization
* Framework, not classification scheme. STRIDE is a good framework, but bad taxonomy.
23.
Threat | Property | Definition | Example
Spoofing | Authentication | Impersonating something or someone else | Pretending to be any of billg, xbox.com, or a system update
Tampering | Integrity | Modifying data or code | Modifying a game config file on disk, or a packet as it traverses the network
Repudiation | Non-repudiation | Claiming to have not performed an action | "I didn't cheat!"
Information Disclosure | Confidentiality | Exposing information to someone not authorized to see it | Reading key material from an app
Denial of Service | Availability | Denying or degrading service to users | Crashing the web site, sending a packet and absorbing seconds of CPU time, or routing packets into a black hole
Elevation of Privilege | Authorization | Gaining capabilities without proper authorization | Allowing a remote internet user to run commands is the classic example, but running kernel code from lower trust levels is also EoP
24. Identifying STRIDE Threats by Data Flow Diagram Element Type
Element | Applicable STRIDE threats
External Entity | S, R
Process | S, T, R, I, D, E
Data Store | T, R, I, D
Data Flow | T, I, D
* graphical representation of the required STRIDE threats that must be investigated
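The standard SDL element-to-STRIDE chart can be encoded as a lookup table, e.g. to generate a per-element checklist. A minimal sketch (the mapping follows the commonly published SDL chart; repudiation on data stores mainly applies to logging stores):

```python
# STRIDE threat types to investigate for each DFD element type,
# per the standard SDL chart.
STRIDE_BY_ELEMENT = {
    "External Entity": {"S", "R"},
    "Process":         {"S", "T", "R", "I", "D", "E"},
    "Data Store":      {"T", "R", "I", "D"},
    "Data Flow":       {"T", "I", "D"},
}


def threats_for(element_type: str) -> set:
    """Return the STRIDE letters that must be investigated for the
    given DFD element type."""
    return STRIDE_BY_ELEMENT[element_type]
```

Iterating this table over every element in a diagram yields the full list of required threat investigations for the model.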
26. Document and Categorize Threats
Threat Description: Attacker elevates privilege by leveraging the service client request process
Threat Target: Service Client Request (5.0)
Threat Category: Elevation of privilege
Risk:
• Damage Potential: 10
• Reproducibility: 2
• Exploitability: 2
• Affected Users: 1
• Discoverability: 10
• Overall: 5
Comments
The threat target in question runs in a Web server process, and the code
runs in the Local System context. This means that any malicious code
executing in the context of the Web server is Local System on the
computer also. Reproducibility and exploitability are low because the
only realistic way to exploit this is for the attacker to take advantage of a
security vulnerability in the Web server process.
The low affected users count is because only this server is affected,
although one could argue that everyone could be affected by this if an
attacker compromised the server.
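The Overall score of 5 in the example above is simply the mean of the five DREAD ratings. A minimal sketch:

```python
def dread_average(damage, reproducibility, exploitability,
                  affected_users, discoverability):
    """Average-DREAD risk score: the mean of the five 0-10 ratings."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5


# The elevation-of-privilege threat documented above:
score = dread_average(10, 2, 2, 1, 10)  # -> 5.0
```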
29. Distractions
Do not “worry” about threats like:
• The computer is infected with malware
• Antivirus scanner is outdated
• Someone removes the hard drive and tampers with the
data
• Admin is not trustworthy
• A user is attacking himself
• Social Engineering
“The 10 Immutable Laws of Security”,
http://technet.microsoft.com/en-us/magazine/2008.10.securitywatch.aspx.
31. How to play?
• Deal out all the cards
• Play hands (once around the table)
Connect the threat on a card to the diagram
Play in a hand stays in the suit
• Play once through the deck
• Take notes
Player | Points | Card | Component | Notes
36. Rules
• Must play in suit if you can
• High card wins the hand
Unless there’s a joker (elevation of
privilege card)
• Aces are for threats not listed on the cards
• 1 point for each threat, 1 for the hand
37. Why does the game work as a tool?
• Attractive and cool
• Encourages flow
• Requires participation
Threats act as hints
Instant feedback
• Social permission for
Playful exploration
Disagreement
• Produces real threat models
38. Step 3: Mitigation
• Approaches to threat mitigation (in order of
preference):
• Redesign
• Use standard mitigations
• Use unique/custom mitigations
• Accept risk in accordance with policies
Step Objective: To address identified threats to
an application design
39. Examples of Standard Mitigations
Threat Example Standard Mitigations
Spoofing
To authenticate principals:
• OpenID authentication
• Windows authentication (NTLM)
• Kerberos authentication
• PKI systems such as SSL/TLS and certificates
• IPSec
• Digitally signed packets
To authenticate code or data:
• Digital signatures
• Message authentication codes
• Hashes
Tampering
• Windows Mandatory Integrity Controls
• ACLs
• Digital signatures
• Message Authentication Codes
Repudiation
• Strong Authentication
• Secure logging and auditing
• Digital Signatures
• Secure time stamps
• Trusted third parties
Information Disclosure
• Encryption
• ACLs
Denial of Service
• ACLs
• Filtering
• Quotas
• Authorization
• High availability designs
Elevation of Privilege
• ACLs
• Group or role membership
• Privilege ownership
• Permissions
• Input validation
There are great threat (and mitigation) libraries (OWASP, CVE, etc.).
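As one concrete instance of the message-authentication-code mitigation listed above for tampering, here is a sketch using Python's standard hmac module (key and message are hypothetical examples):

```python
import hashlib
import hmac


def sign(key: bytes, message: bytes) -> bytes:
    """Produce a message authentication code so that tampering
    with the message becomes detectable."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the MAC and compare in constant time to avoid
    timing side channels."""
    return hmac.compare_digest(sign(key, message), tag)
```

For example, signing a config file's bytes on write and verifying the tag on read detects the "modifying a game config file on disk" tampering scenario from the STRIDE examples (it does not prevent modification; that is what ACLs are for).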
40. Inventing Mitigations
• It’s always risky!
• Mitigations are an area of expertise such as
networking, databases, or cryptography
• Amateurs make mistakes, so do professionals
• Mitigation failures will appear to work until an
expert looks at them
• When you need to invent mitigations, get
expert help
41. Accepting the Threat
• Accepting the threat can be OK if the cost to
mitigate is higher than the expected loss
(exploitation probability and impact)
• Let stakeholders and/or users know
about it anyway
42. How to select threats that can be left without mitigation?
Prioritize the Threats:
• Do the easy fixes first
• Average DREAD method
• Probability x Impact DREAD method
• [put your other more complex methods
here]
43. DREAD
Factor | Description
Damage Potential | What is the damage that could be done?
Reproducibility | How easy is it to reproduce the attack?
Exploitability | How hard is it to meet the conditions for the attack to succeed?
Affected Users | How large and how important is the affected user population?
Discoverability | How easy is it for external researchers/attackers to discover the vulnerability?
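The "Probability x Impact" DREAD variant mentioned earlier can be sketched as follows. The particular grouping of factors, probability from Reproducibility, Exploitability, and Discoverability, impact from Damage Potential and Affected Users, is an assumption here; the slides do not specify the split.

```python
def dread_prob_impact(damage, reproducibility, exploitability,
                      affected_users, discoverability):
    """Probability x Impact DREAD variant (one common split,
    assumed here, not taken from the slides)."""
    impact = (damage + affected_users) / 2
    probability = (reproducibility + exploitability
                   + discoverability) / 3
    return probability * impact
```

Unlike the plain average, this variant ranks a hard-to-exploit but devastating threat differently from an easy, low-damage one, which is the point of separating the two axes.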
45. Distractions
Mitigation is used to
• Address or alleviate a problem
• Protect customers
When designing secure software, it is not
enough to:
• Create a great model
• Identify lots of threats
46. Step 4: Validation
• Validation of the model
• Validation of enumerated threats
• Validation of mitigations
• Validation of assumptions
Step Objective: To help ensure that threat
models accurately reflect application design and
potential threats
49. Validating the Threat Model
Does the threat model demonstrate a sufficient
level of understanding of product security?
• Does the diagram match final architecture?
• Are all threats enumerated?
Minimum: STRIDE per element
• Has Test or QA reviewed the model?
• Is each threat mitigated?
• Are mitigations done right?
Were these checked before Final Security Review?
• Shipping will be more predictable
50. Validating Quality of Threats and Mitigations
Verify Assumptions
Good Threats describe:
• The attack
• The context
• The impact
Good Mitigations:
• Are associated with a threat
• Describe the mitigation
• Are filed as a bug/task
53. Microsoft SDL Threat Modeling: Advantages and Disadvantages
Advantages
• Can be used to find
threats to a system
early in the
development process
• Can be used by both
security experts and
non-security experts
• Can be used to guide
other security
assessment activities
Disadvantages
• Upfront costs
(training, software,
and setup)
54. Conclusion
• Overview of the Microsoft SDL Threat Modeling
process
• Advantages and disadvantages
• Steps of the Microsoft SDL Threat Modeling process
• Microsoft Threat Modeling Tool
• Microsoft SDL threat modeling requirements
• Overview of the Microsoft SDL Threat Modeling Tool
• Benefits and features