If you’re reading this, odds are you are under attack. Your Web server is being crushed under the extraordinary load of thousands or even millions of bogus requests. How do you deal with it?
Before we jump into that, a quick definition, courtesy of Wikipedia:
A distributed denial of service attack (DDoS) occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. These systems are compromised by attackers using a variety of methods, though most commonly it’s due to malware or trojan attacks, either pre-scheduled or triggered by an external event.
There are a number of ways to deal with a DDoS attack, but to find out best practices, I checked with a top sysadmin, who offered this advice based on a recent experience he had with a client site:
As you may know, one of our ecommerce customers suffered a devastating DDoS attack which started early Friday morning and lasted until we finally contracted with a DDoS mitigation service late Saturday night. The service was implemented by pointing the “A” record for the domain to their server. The cost of the service was $350 per month plus $150 setup.
The effects of the attack stopped instantly once DNS resolved to their IP.
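As an aside, you can verify a cutover like this from the outside with a few lines of script. The sketch below is purely illustrative; the domain and the provider's address are placeholders, not values from this incident:

```python
# Hypothetical check that a DNS cutover has taken effect. The domain
# and the mitigation provider's IP below are placeholders.
import socket

DOMAIN = "shop.example.com"        # the attacked site (placeholder)
MITIGATION_IP = "203.0.113.10"     # provider's scrubbing address (placeholder)

# Collect every address the domain currently resolves to.
resolved = {info[4][0] for info in socket.getaddrinfo(DOMAIN, 80)}
if MITIGATION_IP in resolved:
    print(f"{DOMAIN} now resolves to the mitigation service")
else:
    print(f"cutover not visible yet; {DOMAIN} resolves to {sorted(resolved)}")
```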
There are several of these companies around. All seem to have about the same price structure for the same services. I didn’t do much research but chose the first one to respond on a Saturday. All likely had support available on the weekend, but sales staff apparently get time off.
Gathering information on the attack has been somewhat difficult, since during the attack our server was virtually shut down. In fact, the only way we were able to get shell access was to change the DNS to point to a different server, establish an SSH login, and run “top” or something similar to keep the session open when we switched the DNS back.
It is interesting to note that the attack followed DNS according to the TTL we had set. We had it down to 10 seconds because we were in the process of moving the account from one server to another when the attack occurred, and the attack followed the DNS change within 10 seconds or less. There was very little residual attack activity after the DNS switched, and even that stopped within a minute or two.
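If you want to check what TTL a record is actually advertising, the third-party dnspython package makes it a one-liner. Again, the domain here is a placeholder; note also that a caching resolver reports the remaining TTL, which counts down from the zone’s configured value:

```python
# Inspect the advertised TTL using the third-party dnspython package
# (pip install dnspython). The domain is a placeholder.
import dns.resolver

answer = dns.resolver.resolve("shop.example.com", "A")
print(f"A records: {[r.address for r in answer]}, TTL: {answer.rrset.ttl}s")
```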
So here is what we think we encountered, based on anecdotal observations from the colo’s support staff and information from the mitigation service after we blocked the attack. It is interesting to note that the mitigation service does not log activity, so the information they provided comes from spot observations rather than reliable metrics.
1. New incoming IPs were estimated to arrive at rates as high as 100 per second. Each IP attempted to open between 5 and 25 connections.
2. IPs came from all around the net, but enough were from the US that trying to isolate by country was useless (the customer is not regional and does business across the US).
3. At any one time the number of unique IPs was between three and four hundred. Since the software on the mitigation servers expires the IPs it blocks after 15 minutes, and we did not see many instances of the same IPs recurring, the IP pool must have been in the thousands.
4. The sustained attack was estimated at less than 20 MB/sec; however, an accurate measurement is not available.
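To put those estimates in perspective, a quick back-of-envelope calculation (based only on the rough figures above, not on measurements) shows why the first wave was fatal:

```python
# Back-of-envelope figures from the observations above; all inputs are
# rough estimates, not measurements.
new_ips_per_sec = 100          # peak rate of fresh attacking IPs
conns_per_ip = (5, 25)         # connections attempted by each IP

low, high = (new_ips_per_sec * c for c in conns_per_ip)
print(f"{low}-{high} new connection attempts per second")
# 500-2500 attempts/sec, against a stock Apache prefork default of 256
# worker processes: the pool is exhausted in well under a second.
```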
As for defenses, here is what we found:

1. Apache-based access control lists proved useless. Apache simply ran out of processes within the first wave and stopped before it could even begin to reject connections. Turning KeepAlive off and other tuning tricks might have helped had the attack been significantly smaller, but they provided no relief here: Apache was simply swamped in the first few milliseconds.
2. When traffic was moved to a more powerful host, it might have been possible to use the firewall, with a script building iptables rules on the fly (see the sketch after this list). However, the number of entries iptables can handle efficiently is limited, and the unique IPs exceeded three hundred at a time. That fact, plus the overhead of the script and the constant updating of the tables, would have brought the server to its knees, and the excess IPs would still have flooded Apache.
Solutions such as running httpd under inetd (a significant performance hit in itself) with a massive deny list, or a very restrictive allow list in the hosts.allow file, might have given us back control of the server, but would have done little to bring customers back: the store sells country-wide, and if you arbitrarily block massive ranges of IPs, you block customers too. We could have spent days trying to identify safe ranges and never succeeded.
3. There is an additional complication: the traffic looks like normal traffic, complete with a proper TCP handshake. Scripts that flag IPs based on the number of connections would be only partially effective, since the first few connections from each IP would be allowed until the maximum was reached. The techs at the mitigation service revealed that they rely on pattern matching and signatures, which means that to be totally effective such scripts would need to be constantly updated by someone, or by some other service, much like virus and spam protection schemes.
4. An attack at this level could probably be weathered by a server with a reasonable firewall implementation in place, although some performance degradation would likely be evident.
5. Finally, we were told by knowledgeable sources that there have been multiple attacks of this kind against other websites in the same business as our customer. According to our sources this is not uncommon: the attacks are not random mischief but are paid for by someone who wants to whittle down the competition. Also, this was really only a modestly severe attack; I’m told that attacks hundreds of times more severe than what we saw happen regularly.
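For illustration, here is a minimal sketch of the kind of flag-and-block script discussed in points 2 and 3 above. Everything in it is an assumption: it presumes a Linux host with iptables and a reasonably recent iproute2 (for ss), must run as root, and uses an arbitrary threshold. As point 3 explains, an attacker that completes a normal handshake still gets its first connections through before the threshold trips, and the ever-growing rule chain carries exactly the overhead point 2 warns about:

```python
#!/usr/bin/env python3
# Hypothetical flag-and-block sketch. Assumes Linux, iptables, and the
# `ss` utility from iproute2; must run as root. The threshold and poll
# interval are illustrative, not recommendations.
import collections
import subprocess
import time

MAX_CONNS_PER_IP = 20   # the attack opened 5-25 connections per IP
POLL_SECONDS = 5
blocked = set()

def connection_counts():
    """Count established IPv4 TCP connections per remote address."""
    out = subprocess.run(
        ["ss", "-H", "-4", "-t", "-n", "state", "established"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = collections.Counter()
    for line in out.splitlines():
        fields = line.split()
        if fields:
            ip = fields[-1].rsplit(":", 1)[0]   # peer address:port is last
            counts[ip] += 1
    return counts

while True:
    for ip, conns in connection_counts().items():
        if conns > MAX_CONNS_PER_IP and ip not in blocked:
            # Each offender gets its own DROP rule; with hundreds of
            # unique IPs at a time, this linear chain becomes the
            # bottleneck described in point 2 above.
            subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"])
            blocked.add(ip)
    time.sleep(POLL_SECONDS)
```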
Attacks are not likely the result of anything a website owner may have done. You cannot avoid them simply by not offending anyone. If you have any standing in the search engines you will get targeted when someone decides they want the traffic your industry is serving.
You cannot wait until an attack occurs to plan for it. Moving to a better-hosted server and putting mitigation measures in place will help with smaller attacks and may give you early warning of a larger one. This attack started sporadically, with reports of the server being slow several days before. We do not know whether that was testing or whether there is just some normal ramp-up to an attack like this.
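A crude early warning along those lines can be as simple as watching the request rate in the access log. This sketch is hypothetical: the log path and the threshold are placeholders that would need tuning against your own baseline, and a real deployment would page someone rather than print:

```python
# Hypothetical early-warning sketch: tail an Apache access log and flag
# a spike in requests per minute. The path and threshold are placeholders.
import time

LOG_PATH = "/var/log/apache2/access.log"   # placeholder; adjust to your host
THRESHOLD = 1000                           # requests/minute; tune to baseline

with open(LOG_PATH) as log:
    log.seek(0, 2)                         # start tailing from the end
    count, window_start = 0, time.time()
    while True:
        if log.readline():
            count += 1
        else:
            time.sleep(1)                  # no new lines; poll again shortly
        if time.time() - window_start >= 60:
            if count > THRESHOLD:
                print(f"WARNING: {count} requests in the last minute")
            count, window_start = 0, time.time()
```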
Smaller server setups simply do not have the resources to fend off even a moderate attack. If you can’t justify putting each ecommerce site on its own managed private server (yeah, that’s going to happen), then perhaps getting an MPS and stacking several accounts on it, each with its own IP, might be a solution. Hopefully only one of your accounts gets attacked at a time, and perhaps the MPS firewall could be made effective at protecting all of the sites (that needs a little engineering, I suspect).
Better still, create your own DMZ and front all of your accounts with a robust firewall appliance (probably not as easy as it sounds).
All in all, this has been a wake-up call for us. It is without a doubt a topic that we will give great attention to from now on. I hope this post will be helpful to you all, and I thank you again for your suggestions and offers of help.
Reproduced with permission.