
How To Improve Apache Performance for Larger Web Applications

  • Aug 08, 2012
  • by A2 Marketing Team

Apache has a bit of a reputation for being slow and unwieldy. Is that reputation deserved? Well, yes and no, but it doesn’t matter much: if you host users with web applications and can’t give them root access, you’re probably stuck with it. So let’s talk about a few little ways to keep your server ‘happy with Apache’. That almost rhymed!
This article is about web applications, so let’s lay down some groundwork. You’re going to be using a database system with some programming language tied in through FastCGI, mod_fcgid, the PHP DSO, or what have you. We’ll just be discussing Apache here.

So, what are the negative effects Apache can have on a server? What is it that we’re trying to mitigate? Apache itself is almost never very CPU intensive; CPU load comes down to your programming language and your application’s implementation, so we can ignore it here. Similarly, compared to your database(s), Apache’s I/O demands are negligible enough to write off. At that point, we’re left with bandwidth and memory.
Apache can be bandwidth limited, though it’s rarer these days. If you suspect that you are, and you have some CPU cycles to burn, try mod_deflate! It can be configured to compress text, HTML, and/or JavaScript responses. Depending on the complexity and relative size of your application’s client-side code, this can lower your bandwidth usage significantly, and it’s cheaper than a faster uplink! You can take a look at the nitty-gritty details here (version 2.2 assumed): http://httpd.apache.org/docs/2.2/mod/mod_deflate.html
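As a rough sketch, enabling mod_deflate for text-based content can look like the snippet below. The module path and MIME-type list are illustrative; tailor the types to whatever your application actually serves (compressing already-compressed formats like images buys you nothing).

```apache
# Hypothetical httpd.conf snippet (Apache 2.2): compress text-based
# responses only. Binary formats such as JPEG or ZIP won't shrink further.
LoadModule deflate_module modules/mod_deflate.so

AddOutputFilterByType DEFLATE text/html text/plain text/css
AddOutputFilterByType DEFLATE text/javascript application/javascript
```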

With bandwidth out of the way, we can get to the real big one: memory. Depending on your selected MPM, which is probably prefork, Apache processes can grow pretty large in RAM over time. Add in the memory requirements of the database management system and the application itself, and the danger of swapping memory out and dragging your server down grows to unpleasant proportions.
Our memory problem can be boiled down to two fundamental issues:

1) Apache prefork handles each incoming request with a process.
2) Apache very rarely frees memory back to the operating system.
The first issue means that there have to be at least as many Apache processes as there are incoming requests. Bear in mind that most web browsers make requests in parallel, so one person loading one of your web pages could easily generate 5 concurrent requests, each exclusively occupying an Apache process. The second issue means that as processes spawn and grow, they won’t get smaller. Even if your traffic drops back down, Apache won’t shrink the processes it keeps around. On top of that, you don’t want your web application leaking memory into a persistent FastCGI process or the PHP DSO!


First, you’ll want to limit the number of processes Apache can fork. You set this with the MaxClients directive in your httpd.conf. This is where the science starts turning into art. If MaxClients is too low, requests will queue up, and there will be a noticeable delay between someone clicking your link and the page starting to trickle in. If it’s set too high, Apache will happily run you out of memory.
What you’re going to want to do is come up with a general starting point. Take your total server memory in MB. Cut that in half if you’re using PHP or something else Apache is managing, or down to a quarter if you’re using Ruby, Python, etc. (If your DBMS’s needs are high, tone it down some more; this is just a generic starting point.) Now divide that by the size of your average Apache process. If you don’t know your average, go with 30-50MB, or just play with it! Anywhere in the 60-120 range would be a sane starting point.
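The arithmetic above can be sketched as a quick calculation. The specific numbers here — an 8 GB server, PHP running under Apache (so half the RAM), 40 MB average processes — are made-up illustrations, not recommendations:

```python
def maxclients_estimate(total_mb, apache_share, avg_process_mb):
    """Rough MaxClients starting point: the slice of RAM you're willing
    to give Apache, divided by the average Apache process size."""
    return int(total_mb * apache_share / avg_process_mb)

# Hypothetical example: 8 GB server, PHP via the DSO (half the RAM for
# Apache), ~40 MB average process size.
print(maxclients_estimate(8192, 0.5, 40))  # -> 102, within the sane 60-120 range
```

Measure your real average process size under load before settling on a number; guessing 30-50MB is only for getting off the ground.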


Now that you have a limit on the number of processes, let’s keep those processes slim! For this we’ll use the MaxRequestsPerChild directive, which specifies a limit on the number of requests a child process will handle. After a given process has handled that many requests, it dies, and a new one is created in its place when necessary. When a process dies, it gives its memory back, and its replacement starts small again.
The default value for MaxRequestsPerChild is 10,000. That’s fine for serving static content, but it’s ridiculous for a server running a reasonably sized web application. The trade-off is that forking new processes carries some overhead, but that overhead is minimal compared to the cost of your server swapping. Go ahead and set this to 200. That should be high enough to keep the forking overhead from hitting you too hard, but low enough to quell all but the most egregious memory leaks.
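Putting both knobs together, a prefork section in httpd.conf might look something like this sketch. The MaxClients value of 100 is just an illustration from the sizing exercise above — compute your own; the spare-server values are stock defaults:

```apache
# Hypothetical prefork tuning for Apache 2.2. MaxClients caps total
# memory use; MaxRequestsPerChild recycles processes before leaks grow.
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          100
    MaxRequestsPerChild 200
</IfModule>
```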


These little changes should be a good starting point toward getting your server running smoothly and snappily. After that, you can start looking at your database system or application code next!