By Nabeel Saeed
This presentation explores the current DDoS attack landscape. It covers the basics of DDoS attacks and current trends, including the most recent results from the newly published 2015 Imperva Incapsula DDoS Report. It also presents a detailed analysis of one of today's modern, multi-vector DDoS attacks. While dissecting this attack, the presentation explores its anatomy and timeline, as well as the steps used to mitigate each phase of the assault. The session closes with a review of the characteristics of effective DDoS protection solutions used to combat these sophisticated denial-of-service attacks.
So, what exactly is a DDoS attack? In its simplest form, it is an attack that tries to keep internet users from accessing an internet-connected resource, typically a website. The most common way to do this is to take a large group of infected computers, called a botnet, and use them to send lots of traffic at a single target. As the sheer bulk of this traffic makes its way across the internet to the target, it usually creates a bottleneck that prevents any other traffic (the legitimate traffic) from reaching that website. These bottlenecks usually form at the internet connection a website operator has purchased from their ISP. Once this link is saturated, the website becomes unavailable and appears offline.
Let’s look at some trends in the DDoS attack landscape. According to the Verizon Data Breach Investigations Report, there were twice as many attacks in 2014 as the year prior, with the average attack size growing to 15 Gbps in 2014 from 10 Gbps in 2013.
There are really two main types of DDoS attacks: network layer attacks and application layer attacks. Network layer attacks are typically volumetric in nature and try to consume all of the bandwidth available to a website or other internet-connected resource. You can think of this type of attack as “clogging the pipe to a website.” Once the pipe is totally saturated, it can no longer be used for communication and the website will appear offline to visitors.
Application layer attacks, on the other hand, are typically more targeted and more intelligent. They tend to be smaller in size and aim to overload a web server or database, causing it to crash and taking the website down with it. The concept to remember here is “overloading the server.”
Now that we know what a DDoS attack is and how they work, who is launching these attacks? There are four main groups:
Hacktivists – looking to make a point. It could be your company’s industry, political stance, a blog post, a quote in the media by an exec, etc. Whatever it is, hacktivists are using hacking for activism and your site may somehow be involved.
Competitors – looking to take your site down at an opportune time to gain an advantage. Think about taking an ecommerce site offline during Cyber Monday. That would drive consumers elsewhere to fulfill their needs.
Extortionists – this group is looking for ransom money and will take your website offline until you pay them to stop the attack.
Vandals – unfortunately, sometimes the attacker is just looking for notoriety or to cause general malice.
Lastly, before we look at the actual attack, let’s talk about how often attacks happen and how much they cost. This data is from Incapsula’s DDoS impact survey. We found that 45% of all organizations are hit with DDoS attacks. Of that number, 75% are attacked more than once, 91% were attacked within the last 12 months, and 10% are attacked on a weekly basis.
In terms of financial impact, we found that the average DDoS attack lasts 12 hours and costs $40K per hour, leading to a total of roughly half a million dollars of impact. This of course includes things like lost revenue from website downtime, brand damage, collateral damage to other equipment, support costs, etc.
Now that we’ve got the basics out of the way, we’ll spend some time talking about a very sophisticated multi-vector DDoS attack that was launched at one of our customers. Throughout this section, we’ll describe what happened in each of the five stages of the attack, compare it to trends in the DDoS attack landscape, and explain how we mitigated that portion of the attack.
First off, we won’t be discussing who the target was. What I can tell you is a little bit about the nature of the business. The target company is a very successful software as a service platform in the online trading industry. This is important for several reasons.
A SaaS product’s online availability directly equates to revenue. If the site goes down, so does the product, and with it the revenue source.
SaaS products are generally built in a robust way to serve the transactions of millions of customers.
The final thing to note about this customer is that they were in a multi-tenant environment, which meant that when they were hit with the attack, it impacted other people using the same hosting datacenter. This is what is known as being a “noisy neighbor,” and it can quickly get you kicked off your hosting provider when large DDoS attacks arrive.
The first phase of the attack was a simple SYN flood. This volumetric DDoS attack is a typical, run-of-the-mill attack. It’s a blunt weapon that is easy to create, if you have the resources, and easy to defend against. It peaked at 30 Gbps, and the only really interesting thing about it was that it didn’t use any form of amplification, which means the attacker had access to significant resources.
Taking a quick look at SYN flood stats from our Q2 DDoS trend report, we can see that they are among the top DDoS attacks, both in terms of frequency and size. We’ve split SYN floods into two types, large and normal, and the only difference between the two is packet size. Interestingly, SYN floods make up two of the three most common attack types and two of the three largest.
Now let’s look at how to defeat this type of DDoS attack. As I said before, SYN floods and other types of volumetric DDoS attacks are essentially blunt weapons that use a huge amount of traffic to bludgeon targets. The first step to defeating these attacks should be to spread the load across many scrubbing centers. This creates a “many-to-many” defense strategy instead of a “many-to-one” strategy.
Within each scrubbing center, we use dedicated scrubbing hardware to deal with the attack. Ours is a customized hardware solution called the “Behemoth.” It’s a proprietary platform with 170 Gbps of capacity per appliance, used to aggressively blacklist attack sources and scrub attack traffic.
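To make the blacklisting idea concrete, here is a minimal sketch of rate-based SYN source blacklisting. This is not the actual Behemoth logic, which is proprietary; the class name, threshold, and windowing scheme are all illustrative assumptions.

```python
from collections import defaultdict

# Illustrative threshold: SYN packets per measurement window before a
# source is considered part of the flood. Real systems tune this per site.
SYN_THRESHOLD = 1000

class SynFloodFilter:
    """Hypothetical sketch of per-source SYN counting and blacklisting."""

    def __init__(self, threshold=SYN_THRESHOLD):
        self.threshold = threshold
        self.syn_counts = defaultdict(int)  # src_ip -> SYNs this window
        self.blacklist = set()

    def observe_syn(self, src_ip):
        """Count a SYN from src_ip; return True to forward, False to drop."""
        if src_ip in self.blacklist:
            return False  # already identified as an attack source
        self.syn_counts[src_ip] += 1
        if self.syn_counts[src_ip] > self.threshold:
            self.blacklist.add(src_ip)  # rate exceeded: blacklist the source
            return False
        return True

    def reset_window(self):
        """Called at the end of each measurement window; blacklist persists."""
        self.syn_counts.clear()
```

In practice this runs in hardware at line rate; the point is only that volumetric sources are blunt enough to identify by rate alone.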
As the SYN flood subsided, the site was hit by Phase 2 of the attack, another volumetric attack, this time an HTTP flood of 10M requests per second that targeted several specifically chosen resource-heavy pages. This type of attack is frequently used as a smokescreen or diversionary tactic while hackers try other attack methods.
The interesting thing about this phase of the attack is that it didn’t end. In fact, it persisted for weeks even as the other phases of the attack were in progress.
Throughout Q2, we saw shorter application layer attack durations than in previous quarters, but with a high likelihood and frequency of return. More than half of DDoS targets are hit again within a 60-day timeframe.
Phase 2 was certainly more advanced than the SYN flood the attacker used in Phase 1; however, this attack was easily mitigated with our client classification engine, which is essentially an anti-bot module that analyzes traffic in real time to differentiate humans from bots, classify them by purpose, and identify and block DDoS bots. Good anti-bot tools make wonderful additions to DDoS protection products for this reason.
The use of non-intrusive, or transparent, challenges helps minimize false positives. Humans shouldn’t be seeing these challenges, and it’s important to create an environment where helper bots are welcome while bad bots are blocked. In other words, don’t be iron-fisted with bots.
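Here is one hedged sketch of how a transparent challenge can work: the server replies with a page containing a small JavaScript snippet that computes a token and retries the request, so a real browser passes invisibly while a JS-less DDoS bot never does. The function names, HMAC scheme, and secret below are all invented for illustration, not Incapsula's actual mechanism.

```python
import hashlib
import hmac

# Illustrative server-side secret; a real deployment would rotate this.
SECRET = b"demo-challenge-secret"

def make_token(client_ip):
    """Token the embedded JavaScript would compute/receive and send back."""
    return hmac.new(SECRET, client_ip.encode(), hashlib.sha256).hexdigest()

def handle_request(client_ip, token=None):
    """Serve clients that passed the challenge; challenge everyone else.

    Returning 'challenge' means: send a page whose JS sets the token
    cookie and reloads -- invisible to a human, fatal to a bot that
    cannot execute JavaScript.
    """
    if token and hmac.compare_digest(token, make_token(client_ip)):
        return "serve"
    return "challenge"
```

A human visitor sees at most a brief page load; a primitive bot loops on the challenge page and is never served the real resource.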
Next on the attacker's list were the website's AJAX objects. This was a smart choice, as some bot filtering methods like those I just described (e.g., JavaScript challenges) will not be completely effective in protecting AJAX resources from application layer attacks. Moreover, attacking AJAX objects ensures direct impact on the database – typically one of the most sensitive chokepoints.
The fact that the targeted objects were located in a "registered users only" area also says a lot about the attacker's familiarity with the site's architecture, and hints at the level of reconnaissance that preceded the attack.
In a tactic more usually seen in an APT, the hacker scoped out the application and discovered a heavy AJAX resource within the authenticated part of the site. He then needed to get in, authenticate, and use that AJAX resource as the attack vector. To do that, he used hijacked cookies to make the heavy AJAX requests.
We used the targeted nature of this attack to our advantage during its mitigation. By limiting the number of “suspects” to a small sub-group of registered-only users, we were able to filter out malicious bots by visitor reputation while identifying abnormal behavior patterns. In this case the abnormal behavior was fairly easy to spot because it all involved unusual usage of these AJAX objects.
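The abnormal-behavior idea can be sketched as a sliding-window rate check per authenticated session on the heavy AJAX endpoint. The window length and limit below are invented example values, not the thresholds actually used in this mitigation.

```python
from collections import defaultdict, deque

# Illustrative numbers: no human clicks a heavy AJAX widget 30+ times
# a minute, so anything above that per session is flagged as abnormal.
WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 30

class AjaxAbuseDetector:
    """Hypothetical per-session sliding-window detector for one endpoint."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_CALLS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.calls = defaultdict(deque)  # session_id -> request timestamps

    def is_abusive(self, session_id, now):
        """Record a call at time `now`; return True if the session exceeds the limit."""
        q = self.calls[session_id]
        q.append(now)
        # Discard timestamps that have slid out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit
```

Combined with visitor reputation, a flag like this narrows the suspect pool to sessions behaving in ways no legitimate registered user does.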
Phase 4 of this multi-vector attack is where things got really interesting. The incoming attack was almost totally transparent; it was only noticeable by its impact on the site's performance. It looked like an abnormally high spike in human traffic.
To prevent damage to our client, we selectively deployed CAPTCHA challenges – a relatively low-tech and somewhat disruptive mitigation tool which we only use as a last resort. Even in this case, the CAPTCHAs were only presented to a very narrow group of visitors (~1%) who sported a specific configuration matching that of the attacker.
It worked, but it wasn't the solution we wanted. While looking for a more transparent approach to mitigation, we became aware of people trying to reach us via our social media channels and support tickets, complaining about how Incapsula had "invaded" their desktop browsers.
---
Real browsers attacking – seems like real traffic
Trojan – spread across computers, opening real browsers
There was a bug in the trojan that caused web pages to open with our Error/CAPTCHA page showing – great fun for our support team
Once we were able to identify the trojan powering the attack, our security team was able to reverse engineer it and create a new signature to block the botnet and attack vector for all of our customers.
Phase 5 was the last phase of the attack. Recall that in Phase 3 the attacker used stolen cookies to gain access to an authorized part of the website and launched the attack from there, and in Phase 4 the attacker used a network of infected computers to spawn browsers and point them at the target. In Phase 5, the attacker took it a step further and used infected browsers. This infection lived completely within the browser and thus had access to the browser's sessions, cookies, and capabilities.
This attack lasted 150 hours, using 180,000 IP addresses and 861 variants to generate 700M requests per day.
Before discussing its mitigation, I’ll note that even though Phase 5 of this multi-vector attack was able to generate a devastating 700 million requests per day, some of the large attacks of this nature we saw during Q2 of this year generated as much as 15.5 billion requests per day.
Similarly to how we handled the PushDo bot example from Phase 4, the key to defeating the PhantomJS kit was to first identify the bot, then reverse engineer the software, and finally create a signature to block it.
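The final signature-blocking step can be sketched as a simple match against fingerprints distilled from the reverse-engineered bot. The actual signatures are proprietary and combine many request properties; the patterns and the missing-header heuristic below are purely illustrative assumptions.

```python
import re

# Hypothetical fingerprints -- real signatures combine header order,
# TLS quirks, timing, and more. These patterns are for illustration only.
BOT_SIGNATURES = [
    re.compile(r"PhantomJS", re.IGNORECASE),  # headless-browser token in the UA
    re.compile(r"Error/Captcha-Loop-Bug"),    # made-up marker for the buggy trojan
]

def is_known_bot(user_agent, headers=None):
    """Return True if a request matches a known botnet signature."""
    if any(sig.search(user_agent or "") for sig in BOT_SIGNATURES):
        return True
    # Illustrative secondary check: real browsers virtually always send
    # Accept-Language, while many bot kits omit it.
    headers = headers or {}
    return "Accept-Language" not in headers
```

Once a signature like this is deployed network-wide, every customer is protected from the same botnet without any per-site tuning.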
The common theme of the last several phases of the attack we’ve explored throughout this webinar was bots. The attacker tried increasingly sophisticated methods of evasion before eventually being thwarted. The trends we saw throughout Q2 in part mimic this, as botnet owners used a wider variety of assumed identities to avoid detection. The catch is that, unlike PhantomJS, these identities tended to still be largely “primitive,” meaning they were low-hanging fruit for anti-bot tools because they didn’t even possess the ability to support cookies or handle JavaScript. The takeaway is that any DDoS protection solution you use should include the ability to accurately identify bots, as this will increase its accuracy.
The attack we just reviewed was atypical in many ways; however, there are still many lessons we can learn.
First off, DDoS attacks are becoming more like APTs or advanced persistent threats. Attackers are doing more research to target the soft spots in your environment and you need to be prepared to deal with an attack that may include multiple changing vectors.
DDoS attacks can last for weeks, and they may also start and stop seemingly at random. Your defense tactics need to handle that gracefully. Lastly, don’t expect a silver bullet; even the best vendors may need to implement custom rules to help protect your application from highly customized attacks.
DDoS attacks are growing constantly, largely due to cheap DDoS-as-a-service platforms which require no expertise and can be rented by the hour. That means you need to be prepared to deal with very large attacks, in the range of tens to hundreds of gigabits per second. The most cost-effective way to do this is typically to work with a DDoS protection provider that has a large scrubbing network.
Let’s be clear: DDoS attacks are your problem, not your website visitors’ or customers’. In a perfect world, they shouldn’t even know the attack is happening. Many DDoS solutions on the market are quick to resort to CAPTCHAs or holding screens, which is less than ideal because it impacts website usage for real, legitimate humans. Instead, try to find solutions with transparent challenges that can identify DDoS bots without interfering with human visitors.
This piece of advice might seem counterintuitive compared with my last suggestion, which was to use transparent challenges instead of CAPTCHAs or hold screens to identify DDoS bots. Good DDoS protection solutions should be able to detect and mitigate the vast majority of attacks with no problems, but in some unique circumstances a false positive can occur and a human can be mistaken for a DDoS attacker. If this arises, the solution needs to fail open with something like a CAPTCHA, which allows a human to prove their humanity. This screen should also provide information on how to contact the vendor’s support and complain if they are affected. Humans are human, after all, and if they are inconvenienced, they should know why and be able to express their displeasure.
A few words on automation. Automated, always-on solutions should be used whenever possible, especially for web assets running HTTP/S, because they are by far the most commonly targeted assets. Your DDoS mitigation solution should always be monitoring for attacks and be able to instantly mitigate them should they be detected. That doesn’t mean websites should be in constant lockdown. Monitoring doesn’t equal enforcement. DDoS rules should only be engaged when needed; the rest of the time, websites should be protected by other solutions like a web application firewall but otherwise unhindered.
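The "monitoring doesn't equal enforcement" idea amounts to a state machine: mitigation rules engage only while attack conditions hold. A minimal sketch, assuming an invented requests-per-second threshold and a hysteresis band to avoid flapping on and off:

```python
# Illustrative attack-level traffic threshold for a given site, in
# requests per second. Real systems baseline this per customer.
ATTACK_RPS = 10_000

class DDoSMonitor:
    """Hypothetical always-on monitor that toggles DDoS rules on demand."""

    def __init__(self, threshold=ATTACK_RPS):
        self.threshold = threshold
        self.mitigating = False

    def observe(self, current_rps):
        """Engage mitigation above the threshold; disengage once traffic
        drops well below it (hysteresis prevents rapid on/off flapping)."""
        if current_rps > self.threshold:
            self.mitigating = True
        elif current_rps < self.threshold * 0.5:
            self.mitigating = False
        return self.mitigating
```

Between attacks the site runs unhindered under its normal protections; the moment traffic spikes, the rules engage without waiting for a human.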
Whether you decide to try to tackle the DDoS problem yourself, or work with a provider to deal with it for you, here are some things to keep in mind.
Ensure you have enough network capacity. Today’s DDoS attacks are larger than ever and most people don’t have hundreds of gigabits per second of idle bandwidth at their disposal. Be prepared for bigger attacks.
Invest in technology. Rapid analysis tools, the ability to quickly patch and implement custom rules, and persistence will be key to defeating complex attacks. Make sure you have the help of researchers who are up to speed on the latest attacks and can bring that knowledge to your defense when needed. Picking a solution with a 24x7 SOC that helps you monitor your environment is crucial in case things go sideways.
Want to learn more about Incapsula, or the latest in DDoS trends? Feel free to visit us at www.incapsula.com to download a free copy of our report or to start a free 14-day trial of the product.