Now that the industry is trying to formalize the concept of risk management into neat little compartments like standards (ISO 27005/31000), certifications (CRISC), and products (GRC), guess what? We're doing it wrong. Fundamentally wrong. This talk will discuss why all this current risk management stuff is goofy, and what alternatives might help us understand our ability to protect, our tendency toward failure, and how to match that up with what management will stomach.
3. Kuhn's Protoscience
A stage in the development of a science that is described by:
• somewhat random fact gathering (mainly of readily accessible data)
• a "morass" of interesting, trivial, irrelevant observations
• a variety of theories (spawned from what he calls philosophical speculation) that provide little guidance to data gathering
8. governance, without metrics & models, is superstition
governance, with metrics & models, describes capability to manage risk
9. Why does what you execute on, and how you execute, matter?
12. governance, without metrics & models, is superstition
governance, with metrics & models, describes capability to manage risk
measurably good governance practices (can/will) reduce risk
measurably good governance is simply a description of capability to manage risk
18. Problems with "tangible"
- complex systems, complexity science
- usefulness outside of the very specific
- measurements
- lots of belief statements
19. How Complex Systems Fail
(Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety)
Richard I. Cook, MD
Cognitive Technologies Laboratory
University of Chicago
http://www.ctlab.org/documents/How%20Complex%20Systems%20Fail.pdf
20. Catastrophe requires multiple failures
single point failures are not enough.
The array of defenses works. System operations are generally successful. Overt
catastrophic failure occurs when small, apparently innocuous failures join to create
opportunity for a systemic accident. Each of these small failures is necessary to cause
catastrophe but only the combination is sufficient to permit failure. Put another way, there are
many more failure opportunities than overt system accidents. Most initial failure trajectories
are blocked by designed system safety components. Trajectories that reach the operational
level are mostly blocked, usually by practitioners.
Complex systems contain changing mixtures of failures latent within them.
The complexity of these systems makes it impossible for them to run without multiple
flaws being present. Because these are individually insufficient to cause failure they are
regarded as minor factors during operations. Eradication of all latent failures is limited
primarily by economic cost but also because it is difficult before the fact to see how
such failures might contribute to an accident. The failures change constantly
because of changing technology, work organization, and efforts to eradicate failures.
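The combination requirement above can be sketched numerically. The probabilities and the number of flaws here are invented for illustration; Cook's paper gives no figures:

```python
# Sketch of the "multiple failures required" point: with several
# independent latent flaws, *some* failure being active is common,
# but the joint trajectory needed for catastrophe is rare.
# All numbers are illustrative assumptions, not from the paper.
p_flaw_active = 0.05   # chance a given latent flaw is active today
n_flaws = 4            # flaws that must combine for a systemic accident

# probability that at least one flaw is active (routine degraded mode)
p_any_single = 1 - (1 - p_flaw_active) ** n_flaws

# probability that all flaws align at once (overt systemic accident)
p_catastrophe = p_flaw_active ** n_flaws

print(f"at least one failure active: {p_any_single:.4f}")
print(f"all {n_flaws} align: {p_catastrophe:.2e}")
```

This is the "many more failure opportunities than overt system accidents" gap in miniature: small failures are routine, their conjunction is not.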
21. Complex systems run in degraded mode.
Post-accident attribution to a "root cause" is fundamentally wrong.
All practitioner actions are gambles.
Human expertise in complex systems is constantly changing.
Change introduces new forms of failure.
Views of "cause" limit the effectiveness of defenses against future events.
22. Problems with "notional"
- becomes difficult to extract wisdom - we want a "Gross Domestic Product"
- unable to be defended
- pseudo-scientific
- lots of belief statements
27. Managing risk means aligning the capabilities of the organization, and the exposure of the organization, with the tolerance of the data owners
- Jack Jones
28. evidence-based medicine, meet information security
What is evidence-based risk management?
a deconstructed, notional view of risk
29. [diagram: "risk" at the center of four quadrants - Loss Landscape, Threat Landscape, Asset Landscape, Controls Landscape]
30. [same four-landscape diagram, annotated: a balanced scorecard?]
31. [four-landscape diagram as a balanced scorecard, annotated:
- capability (destroys "g", introducing quality management & management science elements into infosec)
- risk, exposure, change
- "compliance" simply becomes a factor of the loss landscape and/or operating as a control group for comparative data]
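One way to read the scorecard view above is as a roll-up of the four landscape scores into a single risk figure. Everything here - the class, the 0-1 scores, and the formula - is a hypothetical sketch of mine, not something the talk or VerIS prescribes:

```python
# Hypothetical roll-up of the four "landscape" scores into one risk
# number. Scores, weights, and formula are invented for illustration.
from dataclasses import dataclass

@dataclass
class Landscapes:
    asset: float     # exposure from what we must protect (0-1)
    threat: float    # pressure from what acts against us (0-1)
    controls: float  # measured control capability (0-1, higher = better)
    loss: float      # observed/estimated loss tendency (0-1)

def risk_score(ls: Landscapes) -> float:
    """Naive roll-up: asset and threat raise exposure, control
    capability reduces it, loss history scales the result."""
    exposure = (ls.asset + ls.threat) / 2
    return round(exposure * (1 - ls.controls) * (0.5 + ls.loss / 2), 3)

print(risk_score(Landscapes(asset=0.8, threat=0.6, controls=0.7, loss=0.4)))
```

The point is not the formula but the shape: capability and exposure are measured separately, then reconciled against the data owners' tolerance.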
38. What is the Verizon Incident Sharing (VerIS) Framework?
- a means to create metrics from the incident narrative
- how Verizon creates measurements for the DBIR
- how *anyone* can create measurements from an incident
- http://securityblog.verizonbusiness.com/wp-content/uploads/2010/03/VerIS_Framework_Beta_1.pdf
39. What makes up the VerIS framework?
- Demographics
- Incident Classification
- Event Modeling (a4)
- Discovery & Mitigation
- Impact Classification
- Impact Modeling
40. Cybertrust Security
demographics
- company industry
- company size
- geographic location of business unit in incident
- size of security department
41. incident classification
- agent: what acts against us (external, internal, partner)
- action: what the agent does to the asset (malware, hacking, social, misuse, physical, error, environmental)
- asset: what the agent acts against (type, function)
- attribute: the result of the agent's action against the asset (confidentiality, possession, integrity, authenticity, availability, utility)
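The A4 taxonomy above can be sketched as a small data structure. The enumerated values come from the slide; the class name and validation are assumptions of mine, not part of the VerIS specification:

```python
# Hypothetical sketch of a VerIS-style A4 classification record.
# Enumerations follow the slide; structure and checks are my own.
from dataclasses import dataclass

AGENTS = {"external", "internal", "partner"}
ACTIONS = {"malware", "hacking", "social", "misuse",
           "physical", "error", "environmental"}
ATTRIBUTES = {"confidentiality", "possession", "integrity",
              "authenticity", "availability", "utility"}

@dataclass(frozen=True)
class A4Event:
    agent: str      # what acts against us
    action: str     # what the agent does to the asset
    asset: str      # what the agent acts against (type/function, free text)
    attribute: str  # the result of the action against the asset

    def __post_init__(self):
        assert self.agent in AGENTS, f"unknown agent: {self.agent}"
        assert self.action in ACTIONS, f"unknown action: {self.action}"
        assert self.attribute in ATTRIBUTES, f"unknown attribute: {self.attribute}"

e = A4Event("external", "hacking", "web server", "confidentiality")
print(e.agent, e.action, e.asset, e.attribute)
```

Constraining each field to a shared vocabulary is what makes incident narratives comparable across organizations.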
42. incident classification: a4 event model
the series of events (a4) creates an "attack model"
1 > 2 > 3 > 4 > 5
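The chained-event idea can be sketched as an ordered list of A4 tuples. The three-step incident below is invented for illustration, not taken from the framework:

```python
# Illustrative "attack model": an ordered chain of (agent, action,
# asset, attribute) events, per the slide's 1 > 2 > 3 sequence.
incident = [
    ("external", "social",  "end user", "integrity"),        # 1: phishing
    ("external", "malware", "desktop",  "confidentiality"),  # 2: keylogger
    ("external", "hacking", "database", "confidentiality"),  # 3: stolen creds used
]

for i, (agent, action, asset, attribute) in enumerate(incident, 1):
    print(f"{i}: {agent} / {action} -> {asset} ({attribute})")
```

Each link in the chain is itself a complete A4 classification, so the attack model is just a sequence of the records from the previous slide.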
43. discovery & mitigation
- incident timeline
- discovery method
- evidence sources
- control capability
- corrective action
  - most straightforward manner in which the incident could be prevented
  - the cost of preventative controls
44. impact classification
- impact categorization
  - sources of impact ($, direct, indirect)
  - similar to ISO 27005/FAIR
- impact estimation
  - distribution for amount of impact
- impact qualification
  - relative impact rating
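The "distribution for amount of impact" idea can be sketched with a small Monte Carlo draw. The lognormal parameters and cost categories are invented for illustration; VerIS does not mandate a distribution:

```python
# Illustrative impact estimation: draw total direct + indirect loss
# from assumed lognormal distributions and report summary quantiles.
import random

random.seed(42)  # reproducible for the example

def simulate_impact(n=10_000):
    totals = []
    for _ in range(n):
        direct = random.lognormvariate(mu=10, sigma=1)     # e.g. response cost
        indirect = random.lognormvariate(mu=9, sigma=1.5)  # e.g. reputation loss
        totals.append(direct + indirect)
    totals.sort()
    return totals[n // 2], totals[int(n * 0.95)]  # median, 95th percentile

median, p95 = simulate_impact()
print(f"median impact: ${median:,.0f}; 95th percentile: ${p95:,.0f}")
```

Reporting a distribution rather than a single number is what lets the "pseudo-actuarial" comparison with risk tolerance happen later in the talk.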
52. Problems:
Data sharing, incidents, privacy
Failures vs. Successes (where management capability helps)
Talking to the business owner (might still need a "tangible" approach here, but pseudo-actuarial data can help - we still want a GDP)
53. Successes:
Bridge the gap
(IRM becomes tactically actionable based on threat/attack modeling)
(capability measurements bridged to notional increase/decrease in risk)
(complex system problems addressed by showing multiple sources of causes)
Accurate, notional likelihood
Accurate, tangible impact