10 Steps to Achieve
Risk-Based Security
Management
DANIEL BLANDER, TECHTONICA
CINDY VALLADARES, PRODUCT MARKETING AT TRIPWIRE
Risk = Probability × Impact

www.tripwire.com/blog

Daniel Blander
@djbphaedrus
www.tripwire.com

Cindy Valladares
@cindyv

Editor's Notes

  1. Huge growth opportunity in our market, with the problems we help our customers solve, and a long, sustained history of success.
Headquartered in Portland, OR
Open source legacy since the 1980s (1988); first code released 1994
Founded in 1997
Tripwire Enterprise released 2004
Tripwire Log Center acquired/released 2009
Acquired by Thoma Bravo in 2011
302 employees worldwide (as of 2/1/12)
6,092 customers worldwide (as of 1/1/12)
46% of the Fortune 500
27 of 30 US federal government agencies
96 countries
World-class customer support: 96% customer satisfaction; 86% of customers would recommend us to a colleague/friend
2011 winner for Best Enterprise Security Solution (SC Magazine)
7 US patents granted; 15 US patents in process
8 consecutive years of revenue growth, bookings growth, and profitability
  2. Let's begin by talking about some of the changes facing us in the world of information security today. As I speak with customers around the world, a number of trends are emerging.

For example, many information security professionals, executives included, now have to make appeals to other parts of the business to get funding. This often includes speaking with non-technical audiences. I was working with a group of hospitals recently, and they have to appeal to hospital boards for project funding, IT investments, and staffing needs. In these cases, it can be very difficult for the non-technical executives to really understand why the security executives are asking for more money. This presents its own challenges, which I'll discuss in a little more detail later.

I've also spoken with a number of enterprises who are trying to be more proactive in security. Essentially, they want to move from simply focusing on alerting to providing useful information that actually enables strategic decisions: in other words, a decision center.

Another dynamic I've observed is that compliance is really beginning to drive conversations around risk management. I believe this is a result of audits' focus on top-down, risk-based compliance, which brings risk more into the picture as discussions occur around information security.

There is also a need by executive management to more effectively allocate budgets based on objective measures. Since many of these executives are financial professionals, they are accustomed to balancing risk versus reward.

Finally, many of the higher-profile information security events and breaches are more visible than ever to the non-technical executives in our environment. This is due to something I call the iPad effect. How many of you have executives in your company who read the Wall Street Journal or some other newspaper on the iPad, then send around lots of links to stories that relate to information security? The good news is, this provides a prime opportunity for us to engage with them around the importance of what we do every day.

When you put all of this together, I hope you'll understand some of the reasons we undertook this study of the state of risk management, and specifically risk-based security management, and hopefully you'll pick up a few pointers that will help you get your organization to embrace risk as a key part of security.
  3. PCI DSS Req. 12.1.2: Establish, publish, maintain, and disseminate a security policy that includes an annual process that identifies threats and vulnerabilities, and results in a formal risk assessment.

PCI DSS Req. 6.2: Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities.
Notes: Risk rankings should be based on industry best practices. For example, criteria for ranking "High" risk vulnerabilities may include a CVSS base score of 4.0 or above, and/or a vendor-supplied patch classified by the vendor as "critical," and/or a vulnerability affecting a critical system component. The ranking of vulnerabilities as defined in 6.2.a is considered a best practice until June 30, 2012, after which it becomes a requirement.

IT Grundschutz: http://rm-inv.enisa.europa.eu/methods_tools/m_it_grundschutz.html
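As an illustration of the Req. 6.2 ranking rule, here is a small Python sketch that applies the criteria quoted above. The helper name and the example vulnerabilities are hypothetical; only the thresholds come from the requirement text.

# Rank a vulnerability "High" per the PCI DSS 6.2 example criteria:
# CVSS base score of 4.0 or above, a vendor patch classified "critical",
# or a vulnerability affecting a critical system component.
def pci_risk_ranking(cvss_base, vendor_critical_patch, affects_critical_component):
    if cvss_base >= 4.0 or vendor_critical_patch or affects_critical_component:
        return "High"
    return "Low"

# Hypothetical examples:
print(pci_risk_ranking(6.8, False, False))  # High (CVSS >= 4.0)
print(pci_risk_ranking(3.1, True, False))   # High (vendor-critical patch)
print(pci_risk_ranking(2.0, False, False))  # Low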
  4. Risk-based security management is a method of choosing where to focus scarce resources and money through a systematic approach. It involves identifying and analyzing threats, identifying the probability of the threats occurring, and the impact if the threats are realized.

It is important to note that the key element of this equation is probability. Many confuse this with possibility. Many things are possible, including a meteor striking the earth, an earthquake in New York City, and a squirrel eating through a power line. Each of these is a threat and has some possibility of happening. But the important question that risk-based analysis raises is: which of these are more probable, or to say it another way, more likely to happen? This does not mean that we dismiss the things we find possible, but rather that we should first focus on the things that are highly probable. Too often I have found security professionals distracted by the latest buzzword, hype, or research discovery. And as many of the breach reports have borne out, too many of the breaches happening are the result of things that are far more simplistic and probable.

A few definitions I want to point out:
Threats are anything that can have a negative impact on things that we value.
Vulnerabilities are situations that can be exploited by a threat.
Impact is the result when a threat applies itself to a vulnerability.

Risk-based security management has its foundation in a wider enterprise risk management system that is focused on enabling decision makers: providing them with the information they need to run the company and plan new initiatives in a way that helps maximize their probability of success, or at least minimize any negative impacts.
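To make the Risk = Probability × Impact ranking concrete, here is a minimal Python sketch. It is an illustration only: the threat names, probability estimates, and dollar impacts are invented for the example, not figures from the presentation.

threats = [
    # (threat, estimated annual probability, estimated impact in dollars)
    ("Meteor strike on the data center", 0.000001, 50_000_000),
    ("Phishing leads to credential theft", 0.60, 250_000),
    ("Old unpatched vulnerability exploited", 0.30, 400_000),
    ("Squirrel eats through a power line", 0.05, 20_000),
]

def risk(probability, impact):
    # Risk = Probability x Impact (expected loss)
    return probability * impact

# Rank highest risk first: the merely "possible" (the meteor) falls to
# the bottom while the probable rises to the top.
for name, p, i in sorted(threats, key=lambda t: risk(t[1], t[2]), reverse=True):
    print(f"{name}: expected loss = ${risk(p, i):,.2f}")

Sorting by expected loss is the simplest possible prioritization; in practice the probability estimates come from the data collection described in step 2 below.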
  5. We believe that there are some key elements that make up the framework of risk-based security management. I've outlined them in the ten steps in the paper, but there are some inherent benefits of this approach, ones that may not seem apparent if you move mechanically through the steps.

First, and most obviously, decisions can be based on the clear identification, analysis, and prioritization of risks. Risk analysis should be based on observable facts and, whenever possible, on measurable data. It is not always possible to gather exact measurable data, nor is it always possible to make risk analysis a quantitative process. But using facts based on observation and relevant data is important in that it roots the decisions in reality. Attempting to make decisions without a point of reference based in reality is a game of pin the tail on the donkey. We end up following long-held beliefs that are based on old or anecdotal observations, and in some cases hearsay. When you root your decisions in observable facts and data, you will find that many of these a priori beliefs will be challenged. [See Jay Jacobs' study of his honeypot data.]

There is another side effect of risk analysis: decisions become more explicit and open to examination. By having facts and data to discuss, the discussions can be opened to discourse, a step toward objectifying the decision-making process, versus it being a belief system, which is often more difficult to open to discourse. One of my longest-running research projects has been around single- and double-loop learning and how opening decision making to examination and testing creates situations where decisions are more thoroughly tested and refined. In addition, the participants in the process believe in the decision and are far more likely to participate in improving and correcting it when challenges occur. (Chris Argyris & Donald Schon)

In the end, basing your risk management program on a risk analysis process allows you and your management to make decisions based on factual data, and can focus your resources, efforts, and worries in the areas that produce the greatest benefits.

"The essence of risk management lies in maximizing the areas where we have some control over the outcome, while minimizing the areas where we have absolutely no control over the outcomes and the linkage between effect and cause is hidden from us." (P. Bernstein, Against the Gods)
  6. Step 1: The first step is identifying what matters to the organization and the decision makers. We often think of these as assets, but take care: the way we often use this term in our industry constrains us. Assets are items of value, and keep in mind that this includes items that are intangible.

An executive such as a CEO or CFO is more likely focused on impact to the revenue stream and the elements that directly affect it than on any particular hard asset, computer, or single set of data. Elements such as customer satisfaction and retention, and protection of intellectual property (inventions, proprietary knowledge), are more likely on his radar. Understanding his priorities, business objectives, and motivations is critical to framing your analysis.

I usually start with the stakeholders for the area being examined. In one case, when I found out that the security department had a limited view of the business, we spent two weeks interviewing every VP and Director in the company about what their department did, how it played into the priorities of the company, and what things kept them up at night. We did this under the thin guise of a "Business Impact Assessment" (BIA; for those of you not familiar with a BIA, in short, it is a method used in business continuity planning to identify key business processes and their importance in relation to the overall business goals). We identified the overall priorities and goals of the business and the key assets (revenue through customer sales), and were able to create a simple but beautiful diagram of the business on a whiteboard that we kept in my office. We had max, min, and median revenue for the company. It also included measures such as key processes for each group, things that they worried about, each group's impact on revenue, and the amount of time before a group's operational outage impacted revenue. This allowed us to think about the impact that various systems and data had on their operations, and how the loss of or damage to them could affect them.

[The same will apply to any analysis, no matter how large or small; the key is understanding the values, motivations, and objectives of the stakeholders.]

Now we could base our risk analysis in the context of the business's own goals and objectives. If you do not know what these are, you are not going to have the context to provide meaningful insight into the factors affecting their decisions.

Step 2: Collecting data on what matters is probably the most involved yet exciting part of the process. It is also one of the most important steps, and one that is often bypassed or shortcut.

Collecting data is a process of gathering observable data. It can include data about revenue, or the impact to the revenue stream if something fails, as I discussed in the first step. It can include data from studies based on empirical data, such as data breach reports, or direct observations collected from honeypots. The objective here is to collect empirical data: not opinions, but data that verifies or invalidates previously held opinions.

I discussed an example of collecting data for identifying assets and impact in the previous step. For the step of identifying frequency, likelihood, and probability, I have used a myriad of tools. Recently I used the available data breach reports from several organizations to estimate probabilities of certain types of attacks. I have also used data from honeypots located in a company's network to determine the frequency with which certain types of attacks are seen on the network.
In one case I used the measures from a DLP system to show the frequency of the tool's ability to identify incidents, and the actual frequency of incidents. Every tool in your environment can be used to collect data that can feed a risk analysis.

However, not all the data you want will be easy to gather, and some will resist quantifying. This often stymies those who expect perfection and exactness, and likewise is the ammunition that some use to attempt to dismiss risk analysis. Let's be clear: risk analysis is not a game of precise prediction. It is a practice of identifying both the probability of things happening and the impact when they occur. The data we collect allows us to make these two analyses more accurately by removing the improbable and being more accurate about the impact. Do not confuse that with precision. Precision is best achieved in hindsight. Accuracy is knowing where you should be able to aim. The data and measures you collect here will be a guide to where you can expect to aim your efforts.

Take care not to try to measure the seemingly unmeasurable, but to identify what can be measured to reduce your uncertainty. For example, attempting to measure the number of threat actors can seem problematic, but identifying simple things that you can measure from such an environment can prove highly effective. Imagine categorizing threat actors by their capabilities (sponsored skilled, unsponsored skilled, semi-skilled, script kiddies, unskilled), or even by where they reside (outside your environment, inside your environment). These are all measures that can be used in analysis and are very valid, as the sketch below illustrates.
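As a hedged sketch of that data-collection step, the snippet below aggregates raw observations (for example, honeypot or IDS events) into per-category frequency estimates. The categories, dates, and record layout are assumptions made up for the illustration.

from collections import Counter
from datetime import date

# Each record: (date observed, threat-actor category)
observations = [
    (date(2012, 1, 3), "Internet-based attacker"),
    (date(2012, 1, 9), "Internet-based attacker"),
    (date(2012, 1, 15), "internal personnel via network"),
    (date(2012, 2, 2), "Internet-based attacker"),
    (date(2012, 2, 20), "script kiddie"),
]

months_observed = 2  # length of the observation window

# Count events per category and turn counts into monthly frequencies.
counts = Counter(category for _, category in observations)
for category, count in counts.most_common():
    print(f"{category}: {count / months_observed:.1f} events/month")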
  7. Step 3: This is where the various elements that affect risk are brought together and exercised. You need to perform a risk analysis and make it meaningful for your audience.

There are multiple "risk analysis" methodologies available. Take care, however, with which one you choose. There is no perfect methodology, but there are some key criteria that we feel reflect the usefulness of a methodology:
It should require that you define the objectives of the assessment (which should meet the needs of the decision makers).
It uses observable and tangible data.
It focuses on accuracy, not precision (ranges, not point estimates).
It focuses on identifying probability and impact.
It uses measures that are normalized and equally applicable at any scale (which means they can be reused in another analysis). As an example, an ordinal scale of "High, Medium, Low" or "5, 4, 3, 2, 1" has little meaning if the values are not tied to a specific, tangible meaning that reflects an agreed-upon set of values that the stakeholders share and find useful in decision making.

Again, the methodology can be either quantitative or qualitative. Both have their strengths: a qualitative approach can be very useful when quantitative measures are either difficult to collect or unavailable at the time of analysis. Likewise, quantitative measures can be highly beneficial when data can be collected and greater precision is needed. Let me give an example of a risk assessment that is meaningful, based on observable data (where available), identifies probabilities, and uses reusable measures. More importantly, it is a bridge between qualitative and quantitative.

With a client I recently examined a vulnerability we had identified in their environment. Our need was to understand the urgency of this vulnerability versus our day-to-day vulnerability remediation efforts. We had little data on it except measures provided by our vulnerability scanning vendor. We focused on this vulnerability because it had become a point of contention, with management and engineers at odds over the risk associated with it. So I suggested we do a quick risk analysis and give them a response in an hour.

In doing this analysis, I asked five questions of the security team:

What [assets] are related to the affected systems? (for example email, payment card data, PII, intellectual property...) The answer was email. I asked what the email was used for, so that I understood the real value and the processes behind it. With a quick question to a nearby VP, we understood that it could be used for M&A activity, executive communication, and several compliance-related activities. Good information on the asset, and some insight into impact. This was qualitative information: we could not define it by an exact revenue number or lost business opportunities in the 30 minutes we wanted to spend on this analysis, but it gave us a narrative about the assets that an executive could relate to quickly.

What population of people would have access to directly exploit this vulnerability? We thought about who could exploit this vulnerability, who would want to exploit it, and what our controls could do to limit the number of actors. We knew that this information would be useful to competitors, acquisition targets, and the like. However, based on a quick examination of the firewalls and logs from various systems, we could identify that the population that could reach this system was limited to internal employees and administrators via the network.
To be fair, we also considered a rarified subset of external entities (hackers) who had already penetrated our network. This is obviously a smaller subset of threat actors than for a system exposed to the Internet. We used this information when we next considered motivation and capability.

What is the level of difficulty in exploiting this vulnerability? We looked at the scale given by CVSS for the exploitability of the vulnerability. It gave us a measure indicating that it was fairly difficult to exploit. We looked for tools that contained working examples of exploits. We looked for publicly available tools, and inside Metasploit. We drew a blank on all counts. A quick reading of the MITRE listing for the vulnerability showed that, as of that time, it had not really emerged beyond the theoretical. The level of difficulty was considered quite high.

What is the frequency with which this type of exploit has occurred elsewhere, and what have we seen in our organization? We considered using a mix of research here, but ultimately realized that, given the lack of a publicly working exploit, it was unlikely we would find anything. We relied on looking for publicly available examples of this exploit being used in the wild. We found none. The frequency was considered quite low, if present at all.

What controls are in place that would mitigate the ability of someone to exploit this vulnerability? We had tools such as a firewall blocking access to the system, which reduced the population that could access it, and we had a few configuration items that might help, but not much else would help mitigate the vulnerability.

I took all the data that was collected and turned the risk into a few sentences that read something like this:

"A vulnerability has been identified that can be used to expose internal email communications. The value of this email is related to the value of keeping confidential any sensitive communication between company personnel (which could relate to competitive advantage, knowledge of confidential financials, M&A activities, HR communications, and corporate planning). There are no publicly known examples of this particular compromise occurring, the controls in place limit the threat to primarily internal personnel, and a high level of competency is required, which likely exceeds that of nearly everyone at the company."

If you are looking for numbers, there aren't any. It's a qualitative analysis. It uses a normalized set of scales:

Threat population and their capabilities: general public, Internet-based attackers, internal personnel via network, internal administrators...
Assets: identification of known business processes that can be associated with an executive's valuation of them.
Level of difficulty: theoretical; high sophistication and coordination; skilled attacker; exists in sophisticated tools; common in several tools; any computer user can do it.

These are all measurable. They all lack a hard number, but they carry significant and useful meaning in decision making. We made data available in a way that centered on the stakeholders' objectives and their mindset. We stated the problem at hand (the objective of the analysis), the assets that were considered, the key threats to those assets, and the identified likelihood of those threats being realized.

What we also did was make as explicit as possible the assumptions we made in the analysis. We made it clear that the exploit, while theoretical, could still be part of an underground exploit tool.
We also made it explicit that we assumed email was used for sensitive communications, and that our perception was that the internal network had not already been compromised or overrun with outsiders (hackers and third parties alike). We made it open to their questions, and we were open to inquiry and adjustments based on their perception of these values and assessments. This allows us and the executives to explore alternative views and other possible scenarios or explanations of our findings. The result is that we could be open about our analysis and reach a mutually agreed-upon statement that we all felt comfortable with, because all of us (including management) were intimate with it. If the risk assessment is based on relevant data, then the discourse should be collaborative, highly interactive, and very rewarding. The objective of this discussion is not to win with your analysis, but to develop an even more refined analysis, one that management has participated in; one where the assumptions and analysis are challenged and subject to testing, alternative ideas are considered and tested, and communication is open.

An important part of this is knowing that the assessment and analysis should not make decisions, but rather surface information that can affect decisions. The decision on the next steps of action is up to the stakeholders. Priorities vary, as does tolerance for risk. So will the assumptions and attributions made when looking at your analysis.
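One way to keep those scales normalized and reusable across analyses is to encode them as data rather than prose. The sketch below is a hypothetical Python encoding of the assessment described above: the scale values mirror the narrative, but the structure and field names are my own invention.

from enum import Enum

# The normalized scales from the analysis, encoded so the same values
# can be reused in the next assessment.
class ThreatPopulation(Enum):
    GENERAL_PUBLIC = "general public"
    INTERNET_ATTACKERS = "Internet-based attackers"
    INTERNAL_VIA_NETWORK = "internal personnel via network"
    INTERNAL_ADMINS = "internal administrators"

class Difficulty(Enum):
    THEORETICAL = "theoretical"
    HIGH_SOPHISTICATION = "high sophistication and coordination"
    SKILLED_ATTACKER = "skilled attacker"
    IN_SOPHISTICATED_TOOLS = "exists in sophisticated tools"
    COMMON_IN_TOOLS = "common in several tools"
    ANY_USER = "any computer user can do it"

# The qualitative assessment of the email vulnerability, with the
# assumptions recorded explicitly alongside the findings.
assessment = {
    "objective": "urgency of this vulnerability vs. routine remediation",
    "assets": "email (M&A activity, executive communication, compliance)",
    "threat_population": ThreatPopulation.INTERNAL_VIA_NETWORK,
    "difficulty": Difficulty.THEORETICAL,
    "observed_frequency": "no publicly known examples in the wild",
    "mitigating_controls": "firewall limits the reachable population",
    "assumptions": [
        "email carries sensitive communications",
        "the internal network is not already compromised",
    ],
}

for field, value in assessment.items():
    print(f"{field}: {value}")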
  8. Step 5: We often jump straight to solutions, and what we end up missing is identifying the objectives of the solution. We have been conditioned by vendors to believe that tools will solve our security issues and that they have the answer. The reality is that every organization has different environments, requirements, and priorities. Each organization's risk assessment will be different, and so will the risks that need to be mitigated. Jumping to a solution without identifying objectives misses the all-important combination of people, processes, and *then* technology that is necessary to create an effective solution.

A favorite example of mine is several companies' rush to implement DLP (Data Loss Prevention) technology. I have had clients rush to deploy DLP and implement immediate "blocking" functionality that stops inappropriately exfiltrated data. Much of their justification for this technology was a belief in a high likelihood that data was being exfiltrated. What was amusing was that many did not have observable data to justify it.

So I stepped in with the idea that we needed to validate the risk before running to an end solution.

The perceived risk: data exfiltration.
Control objective: identify and minimize the exfiltration of sensitive data (sensitive data was well defined, but I will not belabor the details).

As you can see, we might first say "ah, perfect for DLP!" You've listened to too many vendors. Most DLP products that I have worked with only examine well-known protocols: HTTP, FTP, email, and instant messaging clients. In my situation I asked the tough question: what scenarios are you most concerned about where data will be exfiltrated? Most clients start with "hackers sending data out of the company," jump to "malicious insiders," and then get to "employees who do the wrong thing." I then ask them how hackers would perform this activity. A few good security people know this answer, and it is typically not through the protocols examined by DLP. Then I asked about the next group (a scenario likely involving a mix of DLP-scanned and non-DLP-scanned protocols). My question now became: "Based on your biggest risks, how effective will this tool be at even *detecting* data exfiltration from that type of scenario?"

The problem here came down to two mistakes: the lack of an explicit risk analysis that made the threats (scenarios) explicit, and the client losing track of the objectives and immediately attaching to a solution.

Ironically enough, in one client's case we found that the network DLP detected a grand total of 7 incidents per month, 6 of which were external parties sending us their sensitive information, and on average one per month was an internal employee sending their own personal sensitive information for personal purposes. The risk averted was... well, you can figure that out.

Identifying the objectives allows us to establish the aim or purpose of the controls we want to put in place: what needs to be achieved. This is important because this is the tie between the control and the risk. A control objective will identify the risk being addressed and will identify ways to minimize an element of that risk, whether by reducing the threat landscape, the frequency, or the vulnerabilities.

Whenever you examine your objectives, you also need to include the asset owners and those who will be affected by the controls, and be open to exploration. This will broaden the range of mitigation strategies and build collaboration and buy-in to the solution.
Open discourse, inquiry, making supporting data explicit, testing inferences, encouraging alternatives and competing views, making assumptions and attributions explicit, and applying rigor to this process build the collaboration and refinement that unilateral action stifles.

Do not ignore the objectives or the risk that they are designed to mitigate.
Do not assume there is one perfect control. Be open to unique and unusual ideas.
The design process should be open, inclusive of everyone affected, and open to challenge. Be prepared to have your ideas and the things you assume to be "facts" challenged frequently, and allow it to happen. Let your ideas be tested. Fire makes good steel better. Testing makes good ideas better.
Most importantly, when inserted back into the risk analysis, does the control reduce the risk by the expected amount?
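That last question, whether a proposed control reduces the risk by the expected amount when inserted back into the analysis, can be checked directly if the analysis is quantitative. A hypothetical sketch, reusing the Risk = Probability × Impact form from earlier; every figure here is an invented assumption.

def risk(probability, impact):
    # Risk = Probability x Impact (expected loss)
    return probability * impact

# Hypothetical figures for one scenario: employee data exfiltration
# over DLP-scanned protocols.
p_before, impact = 0.40, 300_000
# Assume the control only reduces probability for the scenarios it
# actually covers; here the tool is estimated to catch 30% of attempts.
coverage = 0.30
p_after = p_before * (1 - coverage)

reduction = risk(p_before, impact) - risk(p_after, impact)
print(f"Risk before: {risk(p_before, impact):,.0f}")
print(f"Risk after:  {risk(p_after, impact):,.0f}")
print(f"Reduction:   {reduction:,.0f}  # compare against the control's cost")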
  9. Step #9 -- Monitor and Measure
Now it is important to monitor the control you have put in place. The goal is to validate that the control is satisfying the intended objectives. Measure the effectiveness of the control in relation to the original risks it is designed to mitigate. The measures must focus on clearly identifying changes in risks.

I have frequently merged the work in step 2 with the monitoring and measuring I do here in step 9. While setting up monitoring might seem a good idea for identifying holes in the immature areas, it has also served well in the mature areas where controls are already in place. The DLP example I provided earlier is a great illustration: after we implemented rudimentary network DLP, we were able to collect accurate data on the frequency of what the control (tool) could detect and would act on. In the case I described, the frequency was so low that the value of the tool was minimal. It did not tell us whether data was being exfiltrated, so we still had questions about the level of risk, but in terms of the control we had very clear measures of its effectiveness.

I have never assumed that the data collected was absolute, but rather that it would likely reflect relative changes in things like frequency. At one client we collected data from DLP, web application firewalls, firewalls, anti-virus, and our log system to identify current frequency levels and any changes. The trends and fluctuations were enlightening in showing what our systems were experiencing. We watched as various improvements in software code led to slow trends away from certain application attacks.

Other measures can also help you identify changes in your vulnerability. Something such as changes in technical vulnerabilities is easy to measure and trend because there is a plethora of tools available to measure them. I have used these measures to demonstrate reduced risk, as the likelihood of exploit was reduced due to the lower probability that threat actors had tools to exploit systems that were only vulnerable to recently discovered vulnerabilities. (Suffice it to say, most compromises are achieved through old exploits, so reducing them reduces the largest number of possible break-ins.) These measures can indicate changing levels in the technical vulnerability of your environment to a threat. So can measuring in-place controls such as configuration standards and change. Changes in these measures can begin to indicate possible vulnerability to threats to availability.

Examining and measuring as many as possible of the elements used in the risk analysis can create an understanding of the effectiveness of the control at meeting the objectives and mitigating the intended risks, as well as validating the accuracy of the risk analysis. These measures can also help create an understanding of shifts in the environment and how they affect risk.

Step #10 -- Operate a Feedback Loop (Adjust & Repeat)
Risk-based security management is cyclical and ongoing. The monitoring and measuring of the controls will provide indicators of how effectively the control is being operated, and whether it is reducing the vulnerabilities, reducing frequency, or affecting the threat landscape. This knowledge allows for the re-examination of the risks, the control objectives, the controls, their implementation, and their operation.
It allows refinement, adjustment, and re-examination.

The data that is collected should be used to create a feedback loop that re-examines the risk analysis using the same model, to determine whether the new data affects the resulting risk. The examination should consider multiple possibilities:
Are there changes in the environment that can also affect the metrics?
Are there changes in the threats over time? Does the nature of the threat actors change, or do the threats adapt to controls put in place to thwart them?
Is the control being operated as intended, and are the measures acting as indicators of the control's design and operation?

In the DLP example I gave earlier, we used the findings to adjust our plans going forward on the use of DLP. We also used it as a baseline for future data exfiltration examinations, realizing that a better approach was to identify examples of exfiltration before throwing a tool at a purely perceived threat. The alternative of scanning every protocol and every transmission was so unwieldy (with the tools at the time) that management became a bit more circumspect in its approach.

The goal is to use the measures and observations to continuously adjust perceptions and approach, as sketched below. If the data collected indicates the effectiveness of the control in mitigating risk, then it is valuable. The data can also affect the risk analysis, the control objectives, the control design, and the operation of the controls.
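Here is a minimal sketch of that feedback loop, assuming monthly incident counts reported by a control such as the DLP tool: trend the counts against a baseline and flag shifts worth feeding back into the risk analysis. The numbers and the doubling threshold are illustrative assumptions.

# Monthly incident counts reported by a control (illustrative numbers).
monthly_incidents = {
    "2012-01": 7, "2012-02": 6, "2012-03": 8,
    "2012-04": 5, "2012-05": 19, "2012-06": 21,
}

months = sorted(monthly_incidents)
# Use the first three months as the baseline expectation.
baseline = sum(monthly_incidents[m] for m in months[:3]) / 3

for month in months[3:]:
    count = monthly_incidents[month]
    # A sustained doubling over the baseline suggests the environment or
    # the threat landscape has shifted: time to re-run the risk analysis.
    if count > 2 * baseline:
        print(f"{month}: {count} incidents (baseline {baseline:.1f}) -> re-examine the risks")
    else:
        print(f"{month}: {count} incidents (within the expected range)")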
  10. Now, let's talk about the study itself. This is a broad-based study with responses from over 2,000 individuals spanning 4 different countries. I mentioned that I work for Tripwire, but I want to stress that we commissioned an independent research organization, in this case the Ponemon Institute, to perform this objective study on our behalf. In other words, we didn't want to lead the witness; we wanted an accurate depiction of the state of risk-based security management in today's world.

This is the first of what we hope will be an annual benchmark of the global state of risk-based security management.

Not only do we want to learn about the condition of risk management, we want to derive some prescriptive guidance from these findings. Then, as we survey the same topics again in the future, we can determine whether things are getting better, worse, or staying the same.