The ELK stack (Elasticsearch, Logstash, Kibana) provides a cost-effective alternative to commercial SIEMs for ingesting and managing OSSEC alert logs. This presentation shows how to construct a low-cost SIEM based on ELK that rivals the capabilities of commercial SIEMs.
2. • Software Architect for Trend Micro Data Analytics Group
• Blogger for Trend Micro Security Intelligence and Simply Security
• Email: vichargrave@gmail.com
• Twitter: @vichargrave
• LinkedIn: www.linkedin.com/in/vichargrave
3. • Open Source SECurity
• Open Source Host-based Intrusion Detection System
• Founded by Daniel Cid
• Log analysis and file integrity monitoring for Windows, Linux, Mac OS, Solaris and many *nix systems
• Agent – Server architecture
• http://www.ossec.net
8. • Open source, distributed, full text search engine
• Based on Apache Lucene
• Stores data as structured JSON documents
• Supports single system or multi-node clusters
• Easy to set up and scale – just add more nodes
• Provides a RESTful API
• Installs from RPM or DEB packages and is controlled with a service script
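As an illustration of the RESTful API, a running node can be queried with curl (a sketch; this assumes a node listening on the default HTTP port 9200, and `localhost` is a placeholder for your node's address):

```shell
# Confirm the node is up and see its version information
curl -XGET 'http://localhost:9200/'

# Query overall cluster health, pretty-printed
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
```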
9. • Index – contains documents, ≅ table
• Document – contains fields, ≅ row
• Field – contains string, integer, JSON object, etc.
• Shard – smaller divisions of index data that can be stored across nodes
• Replica – copy of the primary shard
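These concepts can be seen with a pair of curl calls (a sketch; the index name, type and field names here are hypothetical, not the actual OSSEC mapping):

```shell
# Index a JSON document with three fields into a hypothetical index
curl -XPUT 'http://localhost:9200/ossec-alerts/alert/1' -d '{
  "@timestamp": "2014-01-01T12:00:00Z",
  "rule_id": 5503,
  "description": "User login failed"
}'

# Retrieve the same document by its ID
curl -XGET 'http://localhost:9200/ossec-alerts/alert/1?pretty'
```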
10. # default configuration file - /etc/elasticsearch/elasticsearch.yml
######################### Cluster #########################
# Cluster name identifies your cluster for auto-discovery
#
cluster.name: ossec-mgmt-cluster
########################## Node ###########################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: "es-node-1" # e.g. Elasticsearch nodes numbered 1 – N
########################## Paths ##########################
# Path to directory where to store index data allocated for this node.
#
path.data: /data/0, /data/1
11. • Log aggregator and parser
• Supports transferring parsed data directly to Elasticsearch
• Controlled by a configuration file that specifies input, filtering (parsing) and output
• Key to adapting Elasticsearch to other log formats
• Run Logstash from the Logstash home directory as follows:
bin/logstash -f <logstash config file>
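A minimal configuration along these lines might look as follows (a sketch only; the alert log path, type name and output host are assumptions, and a real OSSEC setup would need multiline handling and grok patterns in the filter section):

```
input {
  file {
    path => "/var/ossec/logs/alerts/alerts.log"  # assumed OSSEC alert log location
    type => "ossec-alerts"
  }
}

filter {
  # Parsing of the OSSEC alert format would go here,
  # e.g. multiline joining followed by grok field extraction
}

output {
  elasticsearch {
    host => "localhost"  # Elasticsearch node that receives parsed events
  }
}
```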
14. • General purpose query UI
• JavaScript implementation
• Query Elasticsearch without coding
• Includes many widgets
• Run Kibana in a browser as follows:
http://<web server ip>:<port>/<kibana path>
15. /** @scratch /configuration/config.js/5
* ==== elasticsearch
*
* The URL to your elasticsearch server. You almost certainly don't
* want +http://localhost:9200+ here. Even if Kibana and Elasticsearch
* are on the same host. By default this will attempt to reach ES at the
* same host you have kibana installed on. You probably want to set it to
* the FQDN of your elasticsearch host
*/
elasticsearch: "http://<elasticsearch node IP>:9200",
23. • Designed to work in a trusted environment
• No built-in security
• Easy to erase all the data:
curl -XDELETE http://<server>:9200/_all
• Use with a proxy that provides authentication and request filtering, such as Nginx
– http://wiki.nginx.org/Main
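A sketch of what such an Nginx front end could look like (the listen port, password file path and allowed methods are assumptions; the password file would be created separately with htpasswd):

```
server {
    listen 8080;

    location / {
        # Require a username/password before requests reach Elasticsearch
        auth_basic "Restricted Elasticsearch";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Reject DELETE and any other destructive methods
        limit_except GET POST HEAD OPTIONS {
            deny all;
        }

        proxy_pass http://localhost:9200;
    }
}
```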