5. What is Modeling?
- A model in science is a physical, mathematical, or logical representation of a system of entities, phenomena, or processes
- Essentially, a model is a simplified, abstract view of a complex reality
- Breadth over depth
- Represents criteria specific to the analysis
6. What is Threat Modeling?
- A threat model is: "A systematic, non-provable, internally consistent method of modeling a system, enumerating risks against it, and prioritising them."
  - Systematic
  - Non-provable
  - Internally consistent
  - System model
  - Risk enumeration
  - Prioritisation
8. Why Threat Model? Usual Drivers of Controls
- Audit reports. Prioritises: financial systems, audit house priorities, auditor skills, rotation plan, known systems
- Vendor marketing. Prioritises: new problems, overinflated problems, the product as the solution
- New attacks. Prioritises: popular vulnerability research, complex attacks
- Individual experience. Prioritises: past experience, new systems, individual motives
9. Why Threat Model? Threat modeling provides:
- All (most) information security risks: systematic enumeration of risks
- Prioritisation of risks: puts known risks in their place & compares new risks
- Justification: no appeal to expert authority
- Decision making: scenario modeling to test decisions
- Education: can involve the whole team
10. SensePost CTM Design Goals
- Developed for a consultative role, i.e. likely not the person making the changes
- Focus is on:
  - Providing decision-making information
  - Rapid initial model creation
- Hybrid approach: a bit of all the others; some parts we just threw out
- Highly flexible: initially, due to uncertainty; increasingly less so
- Detailed & aggregated results
- Includes a test plan for verification
12. Entity Overview
- Locations: controls
- Users: enforceable trust
- Interfaces: method of system access, asset value
- Attacks: likelihood, damage
- Tests: certainty, relevance
13. Locations
- Represent the trust (controls) of a location
- Interfaces are exposed at locations
- Users are present at locations
- Three types:
  - Physical: data centers, head office, remote sites
  - Network: Internet, DMZ, server network, user network
  - Logical/Functional (new): represent controls within authorisation levels, e.g. administrative, authenticated, unauthenticated access
14. Users
- Enforceable trust of a user group, i.e. contractual or controlled trust, not gut feel
- Users are mapped to locations
- Interfaces are exposed to users via locations
- Example general groups:
  - Anonymous: unidentified or unauthenticated users
  - External users: suppliers, contractors
  - Internal employees: application users, administrators, call center
15. Interfaces
- Methods of interacting with a system or asset; they are things an attacker could compromise
- Expose the value of the asset
  - Interfaces to the same system have a consistent value
  - Value can be set from existing system criticality ratings
- Types:
  - Physical: console access, hardware
  - Network: Remote Desktop, SSH, NTP
  - Functional (new): represent access to data & functionality within an authorisation role, e.g. administrative access, approve transaction
16. Mapping Users to Locations
- Users are present at certain locations
- Many-to-many mapping
- Both are a representation of controls: "Company founder in the Mission Impossible room" vs. "Unknown outsider on the Internet"
- Location type mappings:
  - Physical: users who can be physically present
  - Network: users who can access the network
  - Logical: users who have been granted, or have, authorisation
17. Mapping Interfaces to Locations
- Interfaces are present at certain locations
- Many-to-many mapping
- Constraints:
  - Physical interfaces are only mapped to physical locations, e.g. a physical server in Data Centre A
  - Network (technical) interfaces are only mapped to network locations, e.g. Remote Desktop in the internal network
  - Functional interfaces are only mapped to functional locations, e.g. Execute Trade in the Broker Role
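These type constraints could be enforced mechanically when building the model. A minimal sketch, assuming a plain-dict representation; the function and type names are illustrative and not from the SensePost tool:

```python
# An interface may only be mapped to a location of the matching type.
MATCHING_TYPES = {"physical": "physical", "network": "network", "functional": "functional"}

def map_interface(mappings, interface, interface_type, location, location_type):
    """Record an interface-to-location mapping, rejecting cross-type pairs."""
    if MATCHING_TYPES.get(interface_type) != location_type:
        raise ValueError(
            f"{interface_type} interface cannot be mapped to a {location_type} location"
        )
    # Many-to-many: one interface may appear at several locations.
    mappings.setdefault(interface, []).append(location)
    return mappings

m = map_interface({}, "Remote Desktop", "network", "Internal Network", "network")
```

Mapping "Physical Server" (physical) to "DMZ" (network) would raise, which is the kind of design-constraint enforcement the deck lists under future work.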
18. Attacks
- An attack is performed on an interface to expose some of its value
- Likelihood is based on factors specific to the attack
  - Excludes the trust of users, or controls in place
  - A general likelihood is defined per attack, but made specific when mapped
  - Popularity, ease of discovery/exploitation, prevalence (DREAD)
- Initial work into using external attack metrics:
  - VERIS: best mapping, sometimes non-discrete
  - CWE: too detailed, vulnerability-specific, no "abuse of privilege"
  - STRIDE: not specific enough
- Impact is the worst-case scenario
  - Defines how much of the interface value would be affected (damage)
- Originally named "risks"
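A per-attack general likelihood built from those DREAD-style factors might look like the following sketch. The equal weighting and the factor names are assumptions for illustration, not the tool's actual formula:

```python
def general_likelihood(popularity: float, ease: float, prevalence: float) -> float:
    """Average attack-specific factors (each rated out of 5) into one
    general likelihood, before it is made specific per mapping."""
    return (popularity + ease + prevalence) / 3

print(general_likelihood(5, 4, 3))  # 4.0
```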
19. Mapping Attacks to Interfaces
- Attacks are performed against interfaces
- Many-to-one mapping
- Likelihood & impact are made specific per mapping
  - System CIA should be considered, e.g. theft of e-mail may be more damaging to the CEO than the gardener
  - "Could this attack lead to a full compromise of the system?"
- Examples:
  - Physical theft of the physical server
  - Password bruteforce against the Outlook Web Access web front-end
  - Abuse of privilege of the Administrator role
20. Tests
- Validate permutations of threat vector combinations
- Can be any type of test that provides more information: technical test, research, policy work
- Different tests provide a different level of certainty: proved / disproved
- Can be granularly mapped, against a specific entity or combination of entities
22. Modeling: A How-To
- Data gathering: collect as much information about the environment as you can
  - Network diagrams, key system documentation, existing risk/criticality analyses, past audit reports
- Interview
  - Ideally, find a tech generalist with a good overview, then get specific; a large company's knowledge is more distributed
  - Look to validate statements across interviews
  - Get multiple "views" on criticality
- Testing
  - Light testing to validate claims, e.g. basic network footprinting or application use
- Passive collection
  - Look for problems that should come out in the TM, e.g. if they have regular & damaging virus outbreaks and the TM disagrees ...
23. System Template
<Name> | <Description>
- AA
  - Authentication source
  - Authorization
- Integration
  - Source
  - Destination
- Criticality (include overall rating, individual ratings & reasons)
  - Confidentiality, Integrity, Availability, Possession, Authenticity, Utility
- Locations
  - Physical, Network, Functional (controls)
- Interfaces (include number & locations)
  - Physical, Network, Functional (access)
- Users (include number & locations)
  - Admins, Users, Support
24. Process
- Identify unique entities from completed system templates and general documentation; reconcile if there are differences
- Identify shared infrastructure & create new systems for it
- Map users & interfaces to locations
- Linked systems:
  - An application's functional interfaces should be mapped to its infrastructure's functional locations
  - For integrated systems, map functional interfaces to locations depending on access (push vs. pull vs. full)
25. Modeling Gotchas
- Keep everything equally specific
  - Summarise when there is no meaningful security difference
  - Don't create interfaces per-system for shared infrastructure
- Don't represent risks more than once
  - e.g. don't map two interfaces to the same system to the same location unless they are different "paths"
  - e.g. administrator access implies "normal" access
- Avoid catch-all systems such as "users' computers"; rather model them as interfaces to the relevant areas
- 2-tier vs. 3-tier applications have different access from the application to the DB & vice versa
29. Risk Equation
- Understand concepts in relation to each other
- Discrete, individually necessary, collectively sufficient
- risk = threat x vulnerability x impact
- Disclaimer: Σ, the international sign for "stop reading here"
30. Input Values
- The tool gives us the following inputs:
  - User trust
  - Location trust (controls)
  - Interface value
  - Attack likelihood
  - Attack impact
- But, complete freedom in defining how they are mashed up
31. Likelihood
- risk = threat x vulnerability x impact
- likelihood = threat x vulnerability
- risk = likelihood x impact
32. Risk Equation Used
- risk = applied likelihood + value at risk
- applied likelihood = attack likelihood (reduced by) user trust + location trust
- value at risk = value of asset (reduced by) amount of asset exposed by attack
33. Risk Equation
- [6 minus]: ratings are out of 5 & denote a positive trust value; we want the "distrust" value
- [multiply by 0.2]: we want the trust & impact to moderate the likelihood & value
- [divide by 2]: we take an average of user & location trust (equally weighted)
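Putting the equation from slide 32 together with these adjustments gives the following sketch. The function and variable names are mine; all ratings are assumed to be out of 5, per the deck:

```python
def risk(attack_likelihood, user_trust, location_trust, interface_value, impact):
    """risk = applied likelihood + value at risk.
    Trust is inverted (6 - x) to get 'distrust', user & location trust are
    averaged (equally weighted), and both terms are scaled by 0.2 so trust
    moderates the likelihood and impact moderates the value."""
    distrust = ((6 - user_trust) + (6 - location_trust)) / 2
    applied_likelihood = attack_likelihood * distrust * 0.2
    value_at_risk = interface_value * impact * 0.2
    return applied_likelihood + value_at_risk

print(risk(5, 1, 1, 5, 5))  # 10.0: a very likely attack, untrusted user and
                            # location, full impact on a maximum-value interface
```

A fully trusted user at a fully trusted location attacking a low-value interface with low impact scores 0.4, so the equation spans roughly 0.4 to 10.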
35. Threat Model Dashboard
- Takes every permutation & provides analysis graphs & a risk curve
- Provides three things:
  - Risk curve: one view to rule them all
  - Analysis graphs: slice & dice
  - Detailed risk searching / pivot table: zoom
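"Every permutation" could be enumerated roughly as below: for each location, combine the users present there with the interfaces exposed there and the attacks mapped to those interfaces. The example data is hypothetical; each tuple would then be scored with the risk equation:

```python
# Illustrative model data (not from the deck).
users_at = {"Internet": ["Anonymous users"],
            "Internal Network": ["Employees", "Admins"]}
interfaces_at = {"Internet": ["Outlook Web Access"],
                 "Internal Network": ["Remote Desktop"]}
attacks_on = {"Outlook Web Access": ["Password bruteforce"],
              "Remote Desktop": ["Password bruteforce"]}

# One (user, location, interface, attack) tuple per valid combination.
permutations = [
    (user, loc, iface, attack)
    for loc in users_at
    for user in users_at[loc]
    for iface in interfaces_at.get(loc, [])
    for attack in attacks_on.get(iface, [])
]
print(len(permutations))  # 3
```

This also makes the "permutations favour specificity" gotcha from the future-work slide concrete: adding a second user group at a location doubles that location's tuples.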
36. Risk Curve
- The challenge was to provide a management view; a single number loses too much context
- Frequency graph of the number of "risks" per severity level
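The frequency graph behind the risk curve amounts to bucketing every scored permutation by severity and counting, as in this sketch (bucketing scheme assumed, not specified in the deck):

```python
from collections import Counter

def risk_curve(risk_scores, bucket_size=1.0):
    """Count risks per severity bucket, preserving the distribution
    that a single aggregate number would lose."""
    return Counter(int(score // bucket_size) for score in risk_scores)

print(risk_curve([1.2, 1.8, 4.4, 9.1]))  # Counter({1: 2, 4: 1, 9: 1})
```

Two mid-severity distributions and one with a long high-severity tail can share a mean, which is exactly the context the curve is meant to keep.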
37. Analysis Graphs
- A "digital" version of the risk curve, with the ability to show risks per entity type
- Can view per "perspective": physical, technical, functional
- Can zoom in to show only risks relating to a specific system
- Can look at "pivot" risks, i.e. attacks available to someone once they have compromised a system
54. Future Work
- Tool re-write, aiming for:
  - Cross platform
  - Enforcing certain design constraints, e.g. physical <-> physical mappings only
  - Macro'ing time-consuming tasks
- Adding population size
- Permutations favour specificity, e.g. if you define multiple user groups for one application & not another, the first app has more risks
- Refining the risk equation
  - Equal consideration of user & location trust may need refining
  - Normalise across physical, network & functional "views"
- Refining modeling bounding as results are tested
Can we improve on qualitative risk assessments to provide a better view of how bad guys will attack you, and can we also extend the scope of what a pentest can achieve without extending the time? The CTM can also dictate a testing plan to make sure you test the "right" things, which improves accuracy; the testing plan should also allow a pentest to run faster, as not everything is tested. The danger is that if you've made a bad assumption, the "find stuff we didn't think of" aspect of the pentest could be lost.
Other tools didn't give us what we needed: they either didn't provide the metrics we needed, required a view external consultants couldn't easily get, took too long, or just weren't great.
The trust associated with a location relates to how much control we have over the actions performed by users at that location. This can lead to scenarios that are non-obvious at first glance. For example, administrative access has fewer controls by default than normal user access, and is hence less trusted; specific technology to monitor and control admin access would increase this trust. The fact that administrators themselves are more trusted is represented on the user, not the location. Different applications that are part of the same system get different logical locations, and possibly different network interfaces, but not different functional interfaces (locations represent control; an interface represents a full compromise of a system). Make sure all unauthenticated locations have a high level of trust, since you can't do anything from them (we assume).
The trust afforded to a user should ideally be based on the ability to monitor their actions, employee screening, contractual remedies, etc. For example, super users are generally considered more trusted, but frequently only because of the position and seniority they occupy, not for solid, codified reasons.
Interface values must be consistent throughout, even when an interface exposes much less value than the entire system. For example, if a system is critical to the business and the web application used to access it only exposes a subset of functionality, it would still be possible to compromise that interface to obtain full value; the reduced functionality can instead be represented by the likelihood and impact under the risk. Even an NTP interface (especially if it runs as SYSTEM or an administrative user and has a history of buffer overflows) should mirror the value of the system.
Removed reference to automated system locations
<Name> - <Description>
Locations  # Places where interfaces and users exist
  Physical  # Physical places such as offices or data centers
  Network  # Network locations such as an internal net or DMZ
  Functional  # Authorization levels within a system, e.g. administrative, authenticated & unauthenticated access. Represents the controls in place for members of this role.
Interfaces  # Means of accessing a system
  Physical  # Physical means such as hardware, or the console
  Network  # Electronic communication means such as RDP or NTP
  Functional  # Authorization levels within a system, e.g. administrative, authenticated & unauthenticated access. Represents the functionality & data members of this role have access to.
Users  # Actors utilizing systems or relevant to the threat model
  Admins  # Administrators of the system; keep it specific to the system, not the supporting infrastructure
  Users  # Normal users of the system
  Support  # Who supports the system/application; are they different from the administrators?
AA  # How authentication & authorization are handled
  Authentication Source  # Where usernames & passwords are stored, and what is checked when a user logs in
  Authorization  # How access within the app is managed, e.g. externally via AD groups, or internal to the app
Integration  # How it connects to other systems; focus on attack paths
  Source  # How it receives data
  Destination  # Where it sends data to
Criticality  # How important this system is to the business
  Confidentiality  # How critical keeping the data secret is
  Integrity  # How critical keeping the data accurate is
  Availability  # How critical the uptime is
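A completed template could be captured as plain data for the modeling tool. The following filled-in example for an imaginary "Webmail" system is entirely hypothetical; the field names follow the template above:

```python
# Hypothetical completed system template as a plain dict.
webmail = {
    "name": "Webmail",
    "description": "Staff e-mail, reachable from the Internet",
    "locations": {
        "physical": ["Data Centre A"],
        "network": ["Internet", "Internal Network"],
        "functional": ["Administrative", "Authenticated"],
    },
    "interfaces": {
        "physical": ["Mail server hardware"],
        "network": ["Outlook Web Access", "SSH"],
        "functional": ["Administrative access", "Read mail"],
    },
    "users": {
        "admins": ["Mail admins"],
        "users": ["Employees"],
        "support": ["Helpdesk"],
    },
    "aa": {"authentication_source": "Active Directory",
           "authorization": "AD groups"},
    "integration": {"source": ["Internet SMTP"], "destination": ["Archive"]},
    "criticality": {"confidentiality": 4, "integrity": 3, "availability": 3},
}
```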
Interlinking:
1) e.g. an Oracle DBA may be able to access the administrative functionality of an app hosted on it
2) Push, pull, full. If data is pushed from one system to another, then the source has access to the destination, and the destination's functional interface is exposed at the source's functional location
2-tier vs. 3-tier: with 2-tier, if you have access to the app then you have DB access; with 3-tier, app access doesn't imply DB access.
But, measuring threat and vulnerability is somewhat difficult, so we measure likelihood.
LaTeX to generate the equation: \left(AttackLikelihood \times \frac{(6-UserTrust)+(6-LocationTrust)}{2} \times 0.2\right) + \left(InterfaceValue \times Impact \times 0.2\right) (use at http://www.codecogs.com/latex/eqneditor.php)