Experiencing some slowness this morning, I checked the access log and found that IP address 184.108.40.206 has been hammering us at around 400 accesses/second, many to the login page. I added code so that the software now exits without delivering any pages to that IP address. If this isn't enough I'll disable its access at the Apache level. Please keep me apprised of any performance issues.
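For anyone curious how to spot this sort of thing themselves, here's a minimal sketch of scanning an access log for the noisiest client IPs. The `top_talkers` function and the sample lines are hypothetical; only the offending address is taken from the log excerpt above.

```python
import re
from collections import Counter

def top_talkers(log_lines, n=5):
    """Count requests per client IP, assuming the IP is the first
    whitespace-delimited field, as in Apache's default log formats."""
    counts = Counter()
    for line in log_lines:
        m = re.match(r"(\d{1,3}(?:\.\d{1,3}){3})\s", line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

# Tiny made-up sample; in practice you'd read the real access log
sample = [
    '184.108.40.206 - - [10/Jun:08:00:01] "GET /login.php HTTP/1.1" 200 512',
    '184.108.40.206 - - [10/Jun:08:00:01] "GET /login.php HTTP/1.1" 200 512',
    '10.0.0.7 - - [10/Jun:08:00:02] "GET /index.php HTTP/1.1" 200 1024',
]
print(top_talkers(sample))  # the offending IP should top the list
```

From there it's easy to eyeball which addresses are out of line with normal traffic.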
As that's a DoS attack, can't your hosting company block it at their firewall for you?
Surprisingly, no. When the website went offline some months back it was because of a DoS attack that overwhelmed the webhosting company's servers, so they shut our website down. Here's their initial response:
Superb Hosting writes:
I have enabled the IP for the time being. If ddos attack occurs, we will have to null route the IP again.
Since, this is a self managed server, unfortunately you will be responsible for managing security in your server.
Please see our AUP for below regarding dos:
12.2.4. Denial of Service Under no circumstances may SI's systems be used to gain access or deny access to a system or attempt to gain or attempt to deny access to a system without the permission of the system's owners (or rightful users). A Denial of Service (DoS) attack is designed to disproportionately consume the resources of a system in order to reduce its ability to serve its function. Under no circumstances may SI's network be used in DoS attacks. Abnormal traffic shapes may cause detrimental effects to other users and/or the network, and, in extreme cases, may have DoS attack-like effects. 12.2.5. You are solely responsible for maintaining the security of access codes, authorization codes, and passwords.
With that being said, if you want security audits done on your server we can do for 8 support credit or OS hardening services are also offered by our PSD department.
It can also be that server is hacked or it is running an outdated and un-patched OS.
Having chapter and verse cited to me wasn't that helpful, so I asked for more details:
Superb Hosting writes:
The DDOS received affected other servers. When our routers are inundated with this much data, normal traffic destined for other servers may be dropped, which is unacceptable. The DDOS can be received or transmitted and in this case it was received by your server. There is very little that can be done to mitigate attacks like this without using a third party DDOS mitigation service.
When I inquired further they responded:
Superb Hosting writes:
Yes, your server was the target of a DDOS. I am uncertain as to why it was suggested to have a security audit. You are correct in saying that there is very little that can be done for a DDOS. It is uncommon for us to have a DDOS, maybe 3 times a year some server or another falls victim within our company. Generally, the DDOS is a result of someone getting angry enough about some content on a site and they want it shut down for a laugh.
I don't know why they refer to a DoS as a DDoS; presumably the extra D stands for Distributed, meaning an attack launched from many machines at once. My reaction to all this was that it seemed strange that they're as helpless in the face of a DoS attack as I am.
The new RAM hopefully brings this episode to a close, and in my subjective assessment the site seems a bit peppier, too. I don't know why 2GB of RAM suddenly became insufficient after years with no problems, but my guess is the MySQL database, which grows with every new message. It's probably nearing 1.7GB now; I won't know the exact number until the next backup.
My vague understanding of the specific cause of the 503's is that Apache, using FastCGI, will occasionally try to start new processes to service new requests. Each process can handle many requests successively over time, though they don't persist indefinitely (I'm not sure why, but they don't last forever). If the system is out of resources when a new process is needed, the request just gets dropped on the floor, resulting in a 503 Service Unavailable error at the user's end.
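The process churn described above is typically governed by a handful of FastCGI settings. A hypothetical fragment, assuming Apache with mod_fcgid (mod_fastcgi has analogous knobs); all values are illustrative, not recommendations:

```apache
# Cap the total number of FastCGI processes so a burst can't exhaust RAM
FcgidMaxProcesses 16
# Recycle each process after this many requests -- one reason
# the processes "don't last forever"
FcgidMaxRequestsPerProcess 500
# Kill processes that have sat idle this long (seconds)
FcgidIdleTimeout 120
```

With a cap like `FcgidMaxProcesses` in place, running out of spawnable processes under load is exactly the situation that surfaces to users as a 503.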
I know the problem persists, so thank you, everyone, for your patience.
I just wanted to let everyone know that I just increased the memory limit for PHP processes. This actually makes sense as a possible cause of the problem: as the MySQL database for the site has grown, the PHP processes that query it may need more memory to handle the results.
It's also possible that the mysqld process (which services MySQL queries) needs a higher memory limit, so that's the next candidate for something to try.
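For reference, these two memory knobs live in different places. The PHP side is a single php.ini setting; the value below is illustrative, not a recommendation:

```ini
; php.ini -- per-process memory ceiling for PHP scripts
memory_limit = 256M
```

On the MySQL side, the rough equivalents live in my.cnf under the `[mysqld]` section; which setting matters depends on the storage engine (e.g. `innodb_buffer_pool_size` for InnoDB tables, `key_buffer_size` for MyISAM).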
There's no need to post here if you observe the problem. I use the site, too, and the site gets slow a while before we start getting 503's, so I'm bound to notice if the problem isn't really fixed.
I have been working on the performance problem and have not been able to find a solution. It comes and goes without giving any hints why. It affects only dynamic pages, so that implies a problem with PHP or MySQL, or in their interaction with Apache. Restarting the daemons helps for less than a minute, and the same is true of rebooting the server. Possibly there's been a break-in and someone is co-opting server resources, but if that's the case then they're hiding their tracks very well.
Another possibility is that our server is getting long in the tooth and needs to be upgraded. Software upgrades affecting the control panel (the server's control panel, not EvC's) occur on an ongoing basis, so possibly the new control panel and the older software are not playing well together. Software upgrades cost nothing or almost nothing (except time), but it can be risky putting the newest software on old hardware. Upgrading to a new server, which will have to be done eventually, would be costly.
Though I'm replying to Tangle, this is just to continue the chain of messages about website performance.
I filed a ticket with our webhost provider on May 30 when the problem became most severe. They made some simple recommendations but nothing revelatory or even that helpful. Other than monitoring I have not done a thing, but I have not since then noticed anything more than the occasional seconds-long load time. Has anyone else experienced persistent problems since then? Naturally I'll continue keeping my eye out for performance issues, but if no one's having them then I have to say that the simultaneity of filing a ticket and the problem going away seems a bit of a suspicious coincidence. I wonder if perhaps, without saying anything in the ticket, they ran some kind of maintenance software just as a precaution to make sure it wasn't anything on their end, maybe some "network table cleanup" program, who knows. The problem being at their end *does* seem unlikely to me because static webpages from the site are served immediately, but networks are very complicated these days with a lot of active processing, so I don't know...
Hopefully the performance issues are gone, but please continue to report problems here.
Yes, thanks, I experienced that one, too. Investigation revealed a network problem of unknown cause and origin. Ping revealed the outage was actually intermittent, with connectivity to the server reestablishing itself for a few seconds several times a minute. Because of the inconsistent connectivity I was unable to get a helpful tracert, but I was able to connect to the webhosting company's website, which is at the same server facility as our server, so it had to be the webhosting company's own internal network problem and not related to our server. The problem seemed to last about five or ten minutes.
I'm experiencing slowness right now but no 503 errors, started around 8 AM ET. Server load is low. There's nothing suspicious in the log. A tracert was fine.
I've investigated the reports posted above, nothing in the logs. I still suspect something related to the processes/daemons on the server. I'll continue monitoring and hope something eventually turns up.
We have been consistently fast for at least a few days. I had filed another ticket with the webhosting company about a week ago, and in reaction to data I provided they blocked a number of IPs, not just from our server but from their entire network. The webhosting company thought the problem similar to a Slowloris attack, where many incomplete requests are submitted to a website, causing the server to keep the connections open while waiting for the requests to complete. Eventually the number of pending requests overwhelms the server, and you begin getting slowness and finally timeouts.
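If it was indeed Slowloris-style, Apache's stock countermeasure is mod_reqtimeout, which drops connections that trickle their request in too slowly instead of holding them open indefinitely. A sketch, assuming the module is enabled; the numbers are illustrative:

```apache
# Give a client 20-40 seconds to finish sending request headers,
# extending the window only while data arrives at >= 500 bytes/second;
# the same idea applies to the request body
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
```

The effect is that a slow-drip client gets disconnected quickly, freeing its slot for legitimate requests.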
These kinds of attacks bring no benefit to the attacker unless the goal is simply to make a website unresponsive. I don't know why anyone using addresses registered through RIPE (the European regional internet registry, and the originator of all the open connections) would care about us.