Slide 3
A Bit About Me …
Background in Mechanical Engineering with a concentration in Robotics
Designed and built robotic jet engine manufacturing systems for
Alcoa Power and Propulsion
Architected large-scale material handling systems (warehouse sortation, airport
baggage handling, shipping and packaging, etc.) for BEUMER Group
Close friend recommended trying business intelligence consulting
Worked as a Splunk PS consultant across many industries
Now focused on IT Security at CAA
Slide 4
About Creative Artists Agency
Headquartered in Los Angeles, CA
10 locations across 6 countries
– Additional small/home offices
– 4,000 employees
– 6 security staff
Talent and Sports Agency
– Represent the world’s leading artists, entertainers, athletes, and brands
Slide 5
What We’re Protecting
Internal Data
– Agent/Executive data
– Corporate information
– Financials
– Internally developed applications
Client Data
– Reputation
– Personal/Sensitive information
– Contracts
– Salary information
Slide 6
Security Challenges We Face
“Target Rich” Environment
– VIPs = prime targets
– Many non-technical users
– High churn rate on assistants
Variety of Threats
– Leaked credentials
– Malicious insiders
– Phishing/spear phishing attacks
– Web-borne threats
Slide 7
Before Splunk
Situation
Manual Processes
– Data from a wide range of point products
– Email threat investigations begin with users
– Deeper investigations require cross-checking logs
Impact
Very slow and reactive
Limited ability to do any kind of trend analysis
No end-to-end picture
Can’t scale to meet growing needs
Slide 8
Example: Phishing Investigations
If a user questions the validity of an email:
Sends email to “Is This Safe” mailbox
1. Homegrown tool checks URLs against AV for known malware
2. Generates report
3. Response sent back to user indicating safe/not safe
Security team reviews all emails sent to “Is This Safe”:
Deeper investigation
1. Manually investigate emails
2. If phishing, check email logs for sender/IP/recipients
3. Check web security appliance to see who clicked on URLs
4. Reach out to users to resolve
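The homegrown URL-check step described above can be sketched in a few lines. This is a minimal illustration under assumptions, not the actual tool: the known-malicious list, the regex, and all function names are hypothetical stand-ins for the real AV lookup.

```python
import re

# Hypothetical known-bad URL list standing in for the AV lookup
# the homegrown tool performs against each extracted URL.
KNOWN_MALICIOUS = {"http://evil.example.com/login"}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def extract_urls(email_body: str) -> list[str]:
    """Pull every http(s) URL out of the raw email text."""
    return URL_PATTERN.findall(email_body)

def check_email(email_body: str) -> dict:
    """Return a safe/not-safe verdict plus the URLs that were flagged."""
    urls = extract_urls(email_body)
    flagged = [u for u in urls if u in KNOWN_MALICIOUS]
    return {
        "urls_found": urls,
        "flagged": flagged,
        "verdict": "not safe" if flagged else "safe",
    }

body = "Please verify your account at http://evil.example.com/login today."
report = check_email(body)
```

A response like `report["verdict"]` is what would be sent back to the user; anything flagged then feeds the security team's deeper investigation.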
Slide 9
Security Requirements
• Objectives
– Eliminate manual processes
– Correlate disparate data sources to maximize context in security investigations
• Goal
Collect and analyze all security-relevant data to streamline incident investigations and improve incident response times
Need a Single Pane of Glass for All Security Data!
Slide 10
How We Use Splunk
Primary incident investigation tool – single pane of glass
– Correlate and view data from disparate security point products
Firewall
IPS
Cloud service event logs
Email security appliance
Web security appliance
External threat feeds
Proactive Security Monitoring
– Failed / Successful logins
– Data leakage
– Known high-risk IPs
– Antivirus threat detections
Regular Security Auditing
– User provisioning
– Password changes
– New device logins
– HR changes
– Security group changes
– New cloud instance creation
Behavioral Analytics
– Z-score analysis
Operational Intelligence
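The z-score analysis listed under behavioral analytics works by measuring how far each observation sits from the series mean, in units of standard deviation. The sketch below is a generic illustration of the technique, not CAA's actual implementation; the daily failed-login counts are made-up sample data.

```python
from statistics import mean, stdev

def z_scores(values):
    """Z-score of each point relative to the series mean and stdev."""
    m, s = mean(values), stdev(values)
    if s == 0:  # flat series: nothing deviates
        return [0.0] * len(values)
    return [(v - m) / s for v in values]

def anomalies(values, threshold=2.0):
    """Indices whose absolute z-score exceeds the threshold."""
    return [i for i, z in enumerate(z_scores(values)) if abs(z) > threshold]

# Hypothetical daily failed-login counts for one user; the spike
# on the last day stands out at more than 2 standard deviations.
daily_failed_logins = [3, 4, 2, 5, 3, 4, 30]
flagged_days = anomalies(daily_failed_logins)
```

The same calculation can be expressed directly in a search language over indexed events; the Python form just makes the statistic explicit.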
Slide 11
Incident: Splunk Saves the Day!
Decommissioned domain controller in Geneva took down DHCP
– Wireless and unassigned devices lost network connectivity
– Splunk clearly showed the pattern: releasing/renewing followed by authorization failures
– Resolved within 45 minutes!
Built a custom alert – happened again a few weeks later
– Immediately alerted and resolved
Slide 13
Incident: Security Risk Alert
Daily Splunk security auditing – monitoring user logins by device ID
Detected a user with 2 new iPhone logins on the same day
– Potential risk – stolen credentials used
– Investigated user activity in Splunk to gain context
– Reached out to user
Determined the user had purchased a new iPhone, broken it, and
purchased a replacement within the same day
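The audit described above boils down to flagging any user who logs in from more than one previously unseen device ID in a single day. A minimal sketch, assuming login events arrive as (user, device_id, day) tuples; all names and data here are hypothetical:

```python
from collections import defaultdict

def new_device_logins(events, known_devices):
    """Return (user, day) pairs where a user logged in from more than
    one previously unseen device ID on the same day."""
    seen = defaultdict(set, {u: set(d) for u, d in known_devices.items()})
    new_by_user_day = defaultdict(set)
    for user, device_id, day in events:
        if device_id not in seen[user]:
            new_by_user_day[(user, day)].add(device_id)
            seen[user].add(device_id)
    # Only alert when multiple new devices appear in one day
    return {k: v for k, v in new_by_user_day.items() if len(v) > 1}

events = [
    ("alice", "iphone-old", "2016-03-01"),
    ("alice", "iphone-new-1", "2016-03-02"),
    ("alice", "iphone-new-2", "2016-03-02"),
]
known = {"alice": ["iphone-old"]}
alerts = new_device_logins(events, known)
```

As the incident shows, an alert like this is a prompt for context-gathering and user outreach, not proof of compromise: here the benign explanation was a broken phone replaced the same day.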
Slide 14
Future Plans
Continue to improve security visibility and controls
– Expand advanced proactive analytics (behavioral modeling, etc.)
Become an internal evangelist for Splunk
– Branch out from security to help other groups solve their challenges
IT operations
Product development
Financial analytics
[Splunk platform diagram: IT Operations; Application Delivery; Developer Platform (REST API, SDKs); Business Analytics; Industrial Data and Internet of Things; Security, Compliance, and Fraud]
Slide 15
My Advice
Invest the time to clean data and add context first
– Log types
– Data sources
– User identity
– Devices
– Locations
– Etc.
Context makes data easily reusable and accelerates analysis
– Correlations
– Alerts
– Solving a wide range of problems
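The point about adding context first can be illustrated as a simple lookup join: enrich each raw event with user identity once, and every downstream correlation and alert can reuse those fields directly. The lookup table, field names, and sample event below are hypothetical.

```python
# Hypothetical identity lookup; in practice this context would come
# from HR or directory data sources ingested alongside the logs.
IDENTITY = {
    "jdoe": {"name": "Jane Doe", "dept": "Finance", "location": "Los Angeles"},
}

def enrich(event, lookup):
    """Join a raw log event with identity context so downstream
    correlations and alerts can consume one self-describing record."""
    context = lookup.get(event.get("user"), {})
    # Prefix joined fields to avoid colliding with raw event keys
    return {**event, **{f"user_{k}": v for k, v in context.items()}}

raw = {"user": "jdoe", "action": "login_failed", "src_ip": "10.0.0.5"}
enriched = enrich(raw, IDENTITY)
```

Once events carry fields like `user_dept` and `user_location`, a single alert definition can answer questions ("which department? which office?") that would otherwise require cross-checking separate systems mid-investigation.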
Aside from the above:
Build and manage a team of excellent security professionals
Security is not just a security department responsibility – we get incredible support and partnership from other IT groups
Excellent partnership across all departments, backed by executive support
Derek: Moody's is a credit rating agency. It provides credit opinions and ratings to the market in order for the market to evaluate risk around bonds, lending, et cetera. Moody's is split into a couple of different divisions. One of them is MIS, which is the credit rating agency, which services that function.
We also have a fairly sizeable software development business called Moody's Analytics that focuses on building software, that banks and other organizations can use to evaluate their own credit risk, and some of the investments they make, and overall financial tools for helping them better understand risk exposure around the market investments, credit, and lending. We can probably get you a more canned corporate communications statement. That might be helpful for you to take a look at, as well.
Enter Splunk: We were able to pull data from throughout the organization, including end user systems, security appliances, and email and web servers – correlating and analyzing together for detailed forensics and streamlined incident response.
Yeah, the way we got started is we had a number of different homegrown log aggregation processes that were in place that were fairly absent of any kind of UI or analytics capability. It was typical log collection onto a central server using command line tools to do some analysis, et cetera.
We also had some managed service providers that were giving us some very, very basic analytics by also aggregating some of our log information into some of their tools. It wasn't really delivering the kind of service and capability we were looking for. It was very slow, very reactive, not a lot of ability to do any kind of trend analysis.
We went down a path to evaluate where we wanted to be from a log collection and analytics standpoint. Obviously, we went down the path of looking at a number of the SIEM tools available in the market and evaluated the typical players like QRadar, ArcSight.
We really found that while a lot of them had a good SIEM profile, they weren't really designed to be log archive tools. In order to use them as a log archive tool you had to invest a tremendous amount of overhead in storage, processing power, et cetera. Once you tried to use those platforms as aggregators for any kind of real historic data, they just slowed to a crawl from a usability perspective.
We were much more interested in doing deep, historical forensic analysis and analytics than we were in having a real-time dashboard of things that were going on, because most of that work we view as something we want to outsource to somebody who can staff an eyes-on-glass capability in a much more 24/7 way.
We want the internal platform to really be about how do we go back to six months ago and understand what happened from a security forensic standpoint, or how do we do trend analytics on potential events or that type of activity. We quickly eliminated some of the tools that were much more focused on what I'll call security operations type users and started to look for tools that were much more of a log aggregation type platform.
We looked at a couple of different options there, and that's how we ended up with Splunk. Really, one of the deciding factors was we wanted something that would scale to be able to collect data, not just security data, but data from the whole organization, so that we weren't buying one platform for security and then buying another platform for normal IT operations, because the view was, if we don't commingle all the data together, the value of that analysis is reduced.