Quick Note
• This talk isn’t about the full Incident Response
process
• We aren’t going to cover policy/reporting/etc
• We are here to show some Kung Fu with
tcpdump
Tcpdump for Network Forensics
• This presentation will show you how you can
leverage tcpdump, Linux utilities, and BPFs to
quickly rip through pcap
• Understanding TCP/IP communications along
with common attack patterns allows an
analyst to profile suspicious behavior
• In any security role it is critical to be the “Hunter”
• You need to go beyond the automated tool
– Write your own tools and scripts to address gaps in tools
– Be able to manually perform your job function
• #Don’t_Rely_On_Automated_Tools
Now for the boring stuff… syntax and
some background
Basic Syntax
• Write to a file:
– tcpdump -ttttnnAi any -s0 -w file.cap
• Read from a file:
– tcpdump -ttttnnAr file.cap
• Command Switches Broken Down – Read the Man page:
– -tttt: prints a full, human-readable timestamp (date and time)
– -nn: prevents ports and IPs from being resolved
– -i: interface to listen on
– -r: read a pcap file in
– -A: gives ASCII output
– -s0: sets the snaplen (snapshot length) so tcpdump grabs the full
packet instead of only the first 96 bytes
Basic Syntax Cont.
• -c: Useful switch to set a packet capture limit.
• The command below sets a packet capture limit
of 5000. This is useful to keep a forgotten
tcpdump process from capturing forever.
– tcpdump -ttttnnAi any -s0 -w file.cap -c 5000
• You may also find it useful to launch your
tcpdump process via a screen session, or nohup
the process to avoid it closing if your connection
to the server dies.
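• For example, a rough sketch of a capture that survives a dropped SSH
session (the file name is just a placeholder):
– nohup tcpdump -ttttnnAi any -s0 -w file.cap -c 5000 &
– screen -dmS capture tcpdump -ttttnnAi any -s0 -w file.cap -c 5000
• The first backgrounds the process with nohup; the second runs it inside
a detached screen session named “capture”.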
BPF Filters
• Berkeley Packet Filters (BPFs) allow you to
filter for packets of interest (see the combined example after this list)
– host: filter based on a specific host
– net: filter based on a specific network range
– tcp: match only packets that are TCP
– udp: match only packets that are UDP
– port: filter based on a specific port
– Boolean Logic (and, or)
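• For example, combining these primitives with boolean logic (the host,
ports, and file name are placeholders):
– tcpdump -ttttnnAr file.cap 'host 192.168.56.10 and (port 80 or port 443)'
– tcpdump -ttttnnAr file.cap 'udp and port 53 and net 10.0.0.0/8'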
More Advanced BPF Syntax
• Match HTTP GET requests (see the note after this list):
– tcp[20:4]=0x47455420
• Match HTTP POST requests:
– tcp[20:4]=0x504f5354
• Match TCP packets to/from network 10.0.0.0/8:
– tcp and net 10.0.0.0/8
• Match TCP SYN packets to/from host 192.168.56.10:
– tcp[13]=2 and host 192.168.56.10
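• Note: tcp[20:4] only lines up with the payload when the TCP header is
exactly 20 bytes (no options); a more robust form computes the offset
from the data-offset field:
– tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420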
Reading Pcap
• You can combine Linux utilities to help
summarize tcpdump’s output
• The first and most common is the “less” utility.
I commonly leverage it with “-S” to turn off
line wrapping, which makes the output easier
for me to view:
– tcpdump -ttttnnAr pcap_file.cap | less -S
Tcpdump and Linux Utilities
• Many of the same techniques taught in our
bash scripting lesson can be applied to
tcpdump’s STDOUT
• Below is a quick summary of useful utilities:
– Grep / Egrep
– Awk
– Sed
– Sort/Uniq
Tcpdump and Linux Utilities Cont.
• Below is a quick example showing how you
can leverage grep with tcpdump output:
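• For example (the pcap name and IP are placeholders):
– tcpdump -ttttnnr pcap_file.cap | grep 192.168.56.10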
Tcpdump and Linux Utilities Cont.
• Below is an example of using sed to replace “GET” with “POST”
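• For example (pcap name is a placeholder; -A is needed so the ASCII
payload is visible to sed):
– tcpdump -ttttnnAr pcap_file.cap | sed 's/GET/POST/g'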
Tcpdump and Linux Utilities Cont.
• Here is an example of using awk to print just the 6th element
in the line:
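• For example – with -tttt output the 6th whitespace-separated field is
typically the destination IP.port (placeholders again):
– tcpdump -ttttnnr pcap_file.cap | awk '{print $6}'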
Tcpdump and Linux Utilities Cont.
• Now we can use awk again to print just the IP and
not the port:
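• For example, splitting the previous field on dots and keeping the first
four pieces (a rough IPv4-only sketch):
– tcpdump -ttttnnr pcap_file.cap | awk '{print $6}' | awk -F. '{print $1"."$2"."$3"."$4}'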
Tcpdump and Linux Utilities Cont.
• Finally we can leverage sort and uniq to summarize
the output:
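• For example, turning the destination IPs into a ranked count:
– tcpdump -ttttnnr pcap_file.cap | awk '{print $6}' | awk -F. '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr | head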
Profiling Network Traffic
• When hunting for compromise it’s a good idea to
profile network activity
• This involves defining the legitimate traffic and
starting to look at the outliers
• Let’s talk a bit about what I mean by outliers:
– Systematic (regularly repeating) connections (TCP, UDP, DNS, NetFlow)
– Odd domain names: aldjkafsdpoiadfpoiasd.ru
– Close to legit domain names: micosoftupdat.com
Profiling Network Traffic
• I normally profile enterprise networks using a
few different filters that grow to several
hundred lines
• I commonly break them down by (rough sketch after this list):
– DNS filter – Profile outbound DNS servers
– Web filter – Profile web activity
– Everything else filter – I catch the rest here
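• A rough sketch of how two of these filters might start (the resolver IP,
internal range, and file name are assumptions, not from a real network):
– DNS filter: tcpdump -ttttnnr pcap_file.cap 'udp port 53 and not host 10.1.1.53'
– Everything else: tcpdump -ttttnnr pcap_file.cap 'not port 53 and not port 80 and not port 443'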
Bash For Loop 1-liner
• Here is a really handy 1-liner I use all the time:
for i in *; do <command> "$i"; done
• This can help you automate many different
commands you might need to do over and over,
not just tcpdump
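• For example, running a DNS filter over every pcap in a directory (file
names are placeholders):
– for i in *.cap; do tcpdump -ttttnnr "$i" 'udp port 53' >> dns_summary.txt; done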
• I will often move more complex automation tasks
to Python
Incident Happens - GO
• What do you do when you’re dealing with a potential
compromise?
– Depends heavily on what we know and what we have access to touch
– Network traffic is one of the most powerful sources of data when
dealing with a compromise
• Assuming you know “Something bad is happening” how
would you start?
Hunting: DNS
• I normally start by hunting in DNS because I’ve
personally had a lot of success with this
technique (example filter after this list):
– NXDOMAIN/Loopback/BOGON Name Resolution
– Random looking: zaweqeoinadf.ru
– Close to legit: micosoft.com
– Timing: always key – is this a machine beaconing
on a fixed interval? Every 1 min? Every 5 mins?
– Hits for known bad infrastructure
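• A sketch of a starting point (pcap name is a placeholder; tcpdump prints
NXDomain in DNS response summaries):
– tcpdump -ttttnnr pcap_file.cap 'udp port 53' | grep NXDomain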
Hunting: Mapping Infrastructure
• Once you have one IP or domain you should be able to map out more
bad guy infrastructure
– Similar Whois Registrant Information
– Similar sounding domains (cnndaily.com aoldaily.com)
– Other domains pointing to same IP
– Other domains around known bad guy IP (.12 is bad, what about .13,
.14, .11?)
– Any additional subdomains?
– Other domains sharing that name server
– Historical view of what that domain pointed to? Bad guys reuse
infrastructure, what did that domain resolve to last year?
• Robtex, iplist.net, nslist.net, webboar.com, Domain Dossier,
Google, VirusTotal, DNSDB, Edv-consulting
Hunting: Outbound Connections
• Focusing on just outbound SYNs is another
effective profiling technique
• The goal with this technique is to figure out what
is normal and start to pick out the odd ball
connection
• I once found a SYN every hour; looking into it
further, it turned out to be an encrypted
communication stream to a bad guy destination
– Automated tools don’t do this well #Hunter
Hunting: Outbound Connections
• Here is a filter example for outbound SYNs:
– I may have it focus on odd ports, or weed out traffic to more
common ports (443/80)
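• A sketch of such a filter, assuming 10.0.0.0/8 is the internal range
(adjust for your network; pcap name is a placeholder):
– tcpdump -ttttnnr pcap_file.cap 'tcp[13]=2 and src net 10.0.0.0/8 and not dst net 10.0.0.0/8 and not (dst port 80 or dst port 443)'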
Hunting: Scripting
• When hunting I find myself doing A LOT of whois lookups to
gather info and then build filters, so I automated it with Team
Cymru’s Python whois module (tool available upon request):
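• Not the author’s tool, but a rough shell sketch of the same idea using
Team Cymru’s bulk whois service (file names are placeholders):
– tcpdump -ttttnnr pcap_file.cap | awk '{print $6}' | awk -F. '{print $1"."$2"."$3"."$4}' | sort -u > ips.txt
– (echo begin; echo verbose; cat ips.txt; echo end) | nc whois.cymru.com 43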
Summary
• Don’t rely on automated tools
• Be the hunter - the one who finds what tools
miss
• Be flexible and able to write your own tools
when needed