Benchmarking page loading

Submitted by Larry on 16 March 2008 - 4:21pm

One of the major changes in Drupal 6 (where "major" is defined as "worthy of a mention in Dries' keynote") was a new feature of the menu and theme hooks. The newly introduced "file" and "file path" keys in those hooks' respective return arrays allow them to define files that get included conditionally, only when needed. In theory, that should be a big performance boost: page handlers are virtually never called except on the page they handle, so loading all of that code on every other page is a waste of CPU cycles. Of course, there is also the added cost of the extra disk hit to load that one extra file we need. Modern operating systems should do a pretty good job of caching the file load, but that may vary with the configuration.
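As a refresher, a Drupal 6 page callback can declare the file that holds its code right in hook_menu(). A minimal sketch (the module, path, and function names here are hypothetical; "file path" defaults to the module's own directory, so it is usually omitted):

```php
<?php
// Sketch of a hook_menu() implementation for a hypothetical example.module.
// The "file" key tells the menu system which file to include just before
// calling the page callback, instead of keeping it in example.module.
function example_menu() {
  $items['example/report'] = array(
    'title' => 'Example report',
    'page callback' => 'example_report_page',
    'access arguments' => array('access content'),
    // example.pages.inc is loaded only when example/report is actually
    // requested, not on every page of the site.
    'file' => 'example.pages.inc',
  );
  return $items;
}
```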

So just how much benefit did we get from two dozen fragile patches that were a glorified cut and paste? And is it worth doing more of it? Let's benchmark it and find out.

Test environment

Our test computer is as follows:

Lenovo Thinkpad T61 on AC power
Intel Core2 Duo 2.2 GHz
Kubuntu 7.10 "Gutsy"
PHP 5.2.3

On the software side, we're using a stock Drupal 6.1 with no additional contrib modules except devel. To prime the pump, we'll use the devel generate module to create 50 story nodes (which get promoted to the front page) with 5 comments per story and no taxonomy terms. That may not be a "normal" site, but it is appropriate for our testing needs as we're looking specifically at the bootstrap process, which should be reasonably constant. Naturally we do not have the page cache enabled, as that would defeat the whole purpose of these tests anyway.

In theory, the benefit of the page split should increase the more modules are installed (assuming they are all properly split). The benefit should also be constant; the number of milliseconds saved should be the same no matter how slow the rest of the page is. We'll therefore run a series of tests, on both typically fast and typically slow pages:

  • Create content page, page split, default core modules enabled
  • Create content page, page split, all core modules enabled
  • Front page view, page split, default core modules enabled
  • Front page view, page split, all core modules enabled
  • Create content page, no page split, default core modules enabled
  • Create content page, no page split, all core modules enabled
  • Front page view, no page split, default core modules enabled
  • Front page view, no page split, all core modules enabled

To "unsplit" a module, we will simply move all of the code from the include files to the main .module file. We will also modify the menu_execute_active_handler() function to not check the "file" key. The extra bookkeeping of recording the file to include is fairly small, and only used inside the rarely-called menu_rebuild() process so it is of no concern to us. The front page view is for 10 nodes.
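For reference, the dispatch logic in Drupal 6's menu_execute_active_handler() boils down to something like this (a condensed sketch, not the verbatim core code; error handling is omitted):

```php
<?php
// Condensed sketch of Drupal 6's menu dispatch. The router item comes
// out of the menu_router table built by menu_rebuild().
function menu_execute_active_handler($path = NULL) {
  $router_item = menu_get_item($path);
  if ($router_item && $router_item['access']) {
    // This is the check our "unsplit" build removes: the page handler's
    // file is included only when its page is actually requested.
    if ($router_item['file']) {
      require_once($router_item['file']);
    }
    return call_user_func_array($router_item['page callback'],
                                $router_item['page arguments']);
  }
  return MENU_NOT_FOUND;
}
```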

We will use Apache Bench, per the recommendation in the Benchmarking HowTo. Note that we are not disabling the MySQL query cache here either, as that is also not a representative case of a real-world application (we hope). We'll set concurrency to 1, and iterations to 500. We're also running everything -- Apache, MySQL, and ab -- on the same computer to eliminate network latency as a factor. We'll also enable the devel module in all tests so that we can get a report of the memory used by the page.
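The invocations look something like the following (the base URL and paths are for our local test site and are illustrative; yours will differ):

```shell
# 500 requests, one at a time, against the create content page...
ab -c 1 -n 500 http://localhost/drupal/node/add
# ...and against the front page.
ab -c 1 -n 500 http://localhost/drupal/
```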


Results

Running ab in various configurations, we get the following results:

View            Page split  Modules  Req./sec  Time/req. (ms)  Memory (MB)
Create content  Yes         Default  10.40      96.173          5.61
Create content  Yes         All       7.38     135.527          7.67
Front           Yes         Default   9.01     110.943          5.79
Front           Yes         All       6.52     153.369          7.96
Create content  No          Default   8.36     119.645          7.14
Create content  No          All       5.83     171.670         10.27
Front           No          Default   7.31     136.781          7.45
Front           No          All       5.16     193.927         10.68


Doing a little division, we arrive at the following patterns:

  • Across the board, there is an approximately 25% increase in requests per second.
  • Across the board, there is an approximately 20% decrease in time per request. (That's the same improvement seen from the other side: serving each request in 20% less time means serving about 25% more requests per second.)
  • Across the board, there is an approximately 20-25% decrease in memory usage.
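As a sanity check, those percentages can be recomputed directly from the results table with a quick script (the numbers below are simply copied from the table):

```python
# Recompute the percentage changes from the benchmark table.
# Each row pairs the split and unsplit runs for one view/module combo:
# (req/sec split, req/sec unsplit, ms split, ms unsplit, MB split, MB unsplit)
rows = [
    (10.40, 8.36,  96.173, 119.645, 5.61,  7.14),  # Create content, default
    (7.38,  5.83, 135.527, 171.670, 7.67, 10.27),  # Create content, all
    (9.01,  7.31, 110.943, 136.781, 5.79,  7.45),  # Front, default
    (6.52,  5.16, 153.369, 193.927, 7.96, 10.68),  # Front, all
]

def pct_change(split, unsplit):
    """Percentage change going from the unsplit code to the split code."""
    return (split - unsplit) / unsplit * 100

req_gain  = [pct_change(r[0], r[1]) for r in rows]
time_drop = [pct_change(r[2], r[3]) for r in rows]
mem_drop  = [pct_change(r[4], r[5]) for r in rows]

for label, vals in (("req/sec", req_gain), ("time/req", time_drop),
                    ("memory", mem_drop)):
    print("%-9s %s" % (label, ", ".join("%+.1f%%" % v for v in vals)))
```

Requests per second rise by roughly 23-27%, time per request falls by roughly 19-21%, and memory falls by roughly 21-25% in every configuration.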

I had expected more variation for different types of page request, but the numbers were all reasonably consistent. The memory usage is particularly important, as it means the page split is valuable to opcode cache-using sites, too. While those don't need to reparse code on each request anyway, they still need to load it from the cache. A roughly 25% decrease in memory per request means you can serve about a third more requests at a time before you spend money on more RAM.


Conclusions

From the above data, I conclude that yes, the page split was a very good thing, and current efforts to expand it are worthwhile. Of course, other PHP developers have in the past cited anecdotal evidence that moving all code into a single massive file before executing it improves performance, too. So what's the difference?

Usage patterns. Code that is executed very often is faster if it's in fewer big files. Code that is executed rarely is faster if it is in more, smaller, conditionally-included files. So what's the definition of "very often" or "rarely"? I don't think there is a firm number one can give for that; it will vary greatly by the project. In Drupal's case, page handlers, forms, and "registry hooks" like hook_menu(), hook_theme(), and hook_views_default_views() are all "rarely". hook_boot() I think qualifies as "very often". For others, I suppose we'll have to experiment and find out.

For op-code caches that use shared memory, like APC, I think there is no difference in speed or Apache memory usage.

The reason the memory usage is high when NOT using an op-code cache is that the source code has to be loaded, tokenized, and parsed before getting executed. With an op-code cache, this is done only once for the files involved and the compiled version is stored in shared memory.

So, the file split will benefit those on shared hosts the most, but will have little or no effect on large sites that use op-code caches.