Apache Bench: you may be using the timelimit option incorrectly

Aug 8, 2011
by Radek Burkat  

There is a non-obvious default in ApacheBench (ab) that many people seem to miss when using the -t timelimit argument. This, along with a required argument order to override the default, seems to cause benchmarks to run for much less time than intended. There are plenty of other tools for basic benchmarks (siege, httperf, etc.), but if you're using ab, this is a heads up.

Most people who use these tools are familiar with the idea of setting either the timelimit or the number of requests to test.

ab -n 1000 URL - this sends 1000 requests to the URL. No problems here.
ab -t 60 URL - this always sends requests to the URL for 60 seconds. WRONG

If you type ab without arguments you get a basic list of options, including:

 -t timelimit    Seconds to max. wait for responses

Easy enough. What else could one possibly need to know to use this?

The man page description of this option has a bit more info:

-t timelimit
Maximum number of seconds to spend for benchmarking. This implies
a -n 50000 internally. Use this to benchmark the server within a fixed total
amount of time. Per default there is no timelimit.

This means that if you use the -t option, ab will actually set the -n number of requests to 50k. If 50k requests take longer than the timelimit you specified, everything is fine and ab terminates at the timelimit. If requests are handled quickly, ab terminates when the 50k-request mark is reached, possibly well before the timelimit you intended to test.
Example: if you use a 60s timelimit and do not increase the -n parameter accordingly, anything over 50,000 / 60 = 833 req/s will be limited by the default request count and not by the timelimit you intended.

I googled for some recent ab tests/results, and many of the examples that come up run into this -n default without knowing it. I apologize for using these people's examples, but I do so simply to get across how commonly the -t option is misused.

Ex1: http://www.slideshare.net/pmjones88/framework-and-application-benchmarking - presentation from OSCON 2011 clearly showing a low "Time taken for tests:" when 60s was intended
Ex2: https://github.com/pmjones/php-framework-benchmarks - benchmarking project using ab -t incorrectly
Ex3: https://sites.google.com/site/snorkelembedded/benchmarks - in this case the ab output is included, and the "Time taken for tests:" line clearly states that the tests ran for much less time than the intended 60s.
Ex4: http://doophp.com/benchmark - not accounting for -n at high req/s
and it goes on and on.

You can specify a higher -n to account for this, but you also need to use the correct argument order.

ab -n 1000000000 -t 60 URL    WRONG. This still sets -n to 50k
ab -t 60 -n 1000000000 URL    RIGHT

It's odd that you have to specify these in a particular order, so to verify what's going on I looked at the ab source code. In the function that processes the command-line arguments, the args are handled from left to right, and a glance at the code makes it clear that the 't' case always overwrites requests, so order matters.

case 'n':
    requests = atoi(optarg);
    if (requests <= 0) {
        err("Invalid number of requests\n");
    }
    break;

case 't':
    tlimit = atoi(optarg);
    requests = MAX_REQUESTS;    /* need to size data array on
                                 * something */
    break;

Of course, this argument default is explained if you RTFM. The ab output also clearly shows "Time taken for tests:", but it seems this is easy to miss if you are not expecting it.

It would be nice if ab were updated to drop this -n default when the -t option is used, or at least raise it to something larger than anyone would EVER need. Maybe 640K. That really ought to be enough. ;)




Some testing and ranting below...

So let's verify the above with some benchmarks. I really hate seeing benchmark info online because it's really hard to get right. There are so many variables you need to be aware of, and system configuration and tuning vary widely. Even the tools you are using to test have limitations and quirks. You also really need to decide whether you are trying to do a real-world benchmark or a micro-benchmark focusing on something specific. You can use siege and other tools to replay read/write traffic at high concurrency, or you can focus on a single specific test, or anything in between.
When I'm trying to understand or debug something, I choose to isolate the test as much as possible, minimize latency, figure out where the upper boundaries are, and only later see how much things change as the test is made more "real world" and tuned for throughput.
With this in mind, I chose to use a single keep-alive connection for the test, as this eliminates TCP connection times and ephemeral port exhaustion during some of the tests, minimizes process swapping, etc. The test is done on a fairly well-tuned, single-CPU Intel(R) Xeon(R) E31270 @ 3.40GHz. I specifically disable SpeedStep, forcing the frequency to max, to make sure that all parts of the test run at a fixed maximum frequency.
I also show the exact commands and their output. I ran most of these tests multiple times to make sure the results are consistent, and show the middle run. That's a bit of a rant, as none of it really matters to the point below, but it's good practice for consistency.

Let's start...

It's always nice to verify that the URL you're going to test is returning what you expect. I see so many people benchmarking 404s, redirects, script errors, etc.

# curl  -i http://localhost/helloworld.htm
HTTP/1.1 200 OK
Date: Thu, 08 Sep 2011 21:55:29 GMT
Server: Apache/2.2.3 (CentOS) PHP/5.3.6 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5
Last-Modified: Thu, 08 Sep 2011 07:27:04 GMT
ETag: "5f6d86-c-4ac68fcc22200"
Accept-Ranges: bytes
Content-Length: 12
Content-Type: text/html

hello world

So normally a quick ab command would be something like this...

 
ab -k -c 1 -n 100000 http://localhost/helloworld.htm
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Finished 100000 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /helloworld.htm
Document Length: 12 bytes

Concurrency Level: 1
Time taken for tests: 3.549480 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 100000
Total transferred: 34200000 bytes
HTML transferred: 1200000 bytes
Requests per second: 28173.14 [#/sec] (mean)
Time per request: 0.035 [ms] (mean)
Time per request: 0.035 [ms] (mean, across all concurrent requests)
Transfer rate: 9409.27 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 0
Waiting: 0 0 0.0 0 0
Total: 0 0 0.0 0 0

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 0 (longest request)

So this gives us 28,173 req/s, and there are no issues with any requests taking a long time. The problem is that this test ran for only a few seconds, which is not something you want to claim as steady-state performance.

At this point we can bump up -n requests to run longer, or in most cases you'd use the -t option.

ab -k -c 1 -t 60  http://localhost/helloworld.htm
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Finished 50000 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /helloworld.htm
Document Length: 12 bytes

Concurrency Level: 1
Time taken for tests: 1.774499 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 50000
Total transferred: 17100000 bytes
HTML transferred: 600000 bytes
Requests per second: 28176.97 [#/sec] (mean)
Time per request: 0.035 [ms] (mean)
Time per request: 0.035 [ms] (mean, across all concurrent requests)
Transfer rate: 9410.54 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 0
Waiting: 0 0 0.0 0 0
Total: 0 0 0.0 0 0

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 0 (longest request)

Looking at the output we see that Time taken for tests is 1.7 seconds. We expected it to run for 60 seconds, but of course this is where we hit that -n default limit. If you don't pay attention here, you may look only at Requests per second: and incorrectly conclude that you ran this benchmark for 60 seconds and over that time yielded 28,176 req/s.

So let's try to adjust the default value. If we're getting about 30k req/s, then over 60 seconds we would need about 30k * 60 = 1,800k requests. So let's set our limit to a much higher 10M.

ab -k -c 1 -n 10000000 -t 60  http://localhost/helloworld.htm
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
Completed 15000 requests
Completed 20000 requests
Completed 25000 requests
Completed 30000 requests
Completed 35000 requests
Completed 40000 requests
Completed 45000 requests
Finished 50000 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /helloworld.htm
Document Length: 12 bytes

Concurrency Level: 1
Time taken for tests: 1.722899 seconds
Complete requests: 50000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 50000
Total transferred: 17100000 bytes
HTML transferred: 600000 bytes
Requests per second: 29020.85 [#/sec] (mean)
Time per request: 0.034 [ms] (mean)
Time per request: 0.034 [ms] (mean, across all concurrent requests)
Transfer rate: 9692.38 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 0
Waiting: 0 0 0.0 0 0
Total: 0 0 0.0 0 0

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 0 (longest request)

Well, we set the -n option to 10 million but still got a run of only 50k requests. Of course this is because we put the arguments in the wrong order. Let's try again...

ab -k -c 1 -t 60  -n 10000000  http://localhost/helloworld.htm
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000000 requests
Finished 1623764 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /helloworld.htm
Document Length: 12 bytes

Concurrency Level: 1
Time taken for tests: 60.044 seconds
Complete requests: 1623764
Failed requests: 0
Write errors: 0
Keep-Alive requests: 1623764
Total transferred: 555327288 bytes
HTML transferred: 19485168 bytes
Requests per second: 27062.71 [#/sec] (mean)
Time per request: 0.037 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 9038.51 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 0
Waiting: 0 0 0.0 0 0
Total: 0 0 0.0 0 0

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 0 (longest request)

Now we see that ab is actually taking some time to run. Time taken for tests is 60.044 seconds, which is what we wanted, and the Complete requests count of 1,623,764 makes sense. The latency and longest request are perfect. I often see benchmark results with the "longest request" in the multiple-second range. This is a clear sign that you are hitting some system, server, or language-module issue. It could be something simple, like Apache restarting processes/connections mid-run because you hit the MaxRequestsPerChild or MaxKeepAliveRequests limits, ephemeral port exhaustion, PHP session garbage collection, etc.


EDIT: Someone messaged me, concerned that they can't reproduce the high req/s even with a concurrency of 10 on their EC2 instances. That's like asking why you can't go as fast in a school bus as in a sports car. I guess the answer is that you can't. Don't get me wrong: I use virtualization for lots of things when it makes sense, and I think it's only going to get better, but comparing it against dedicated modern hardware on a specific, tuned, latency-sensitive workload like this is just not fair.

Here are a couple of runs with a concurrency of 10, just for fun.

ab -k -c 10 -t 60 -n 10000000  http://localhost/helloworld.htm
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
Completed 4000000 requests
Completed 5000000 requests
Completed 6000000 requests
Finished 6988682 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /helloworld.htm
Document Length: 12 bytes

Concurrency Level: 10
Time taken for tests: 60.022 seconds
Complete requests: 6988682
Failed requests: 0
Write errors: 0
Keep-Alive requests: 6988682
Total transferred: 2390129244 bytes
HTML transferred: 83864184 bytes
Requests per second: 116477.99 [#/sec] (mean)
Time per request: 0.086 [ms] (mean)
Time per request: 0.009 [ms] (mean, across all concurrent requests)
Transfer rate: 38901.82 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 1
Waiting: 0 0 0.0 0 1
Total: 0 0 0.0 0 1

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 1 (longest request)


Additionally, here is a run through PHP for a basic echo "hello world".

curl -i http://localhost/hw.php
HTTP/1.1 200 OK
Date: Fri, 09 Sep 2011 16:43:27 GMT
Server: Apache/2.2.3 (CentOS) PHP/5.3.6 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5
X-Powered-By: PHP/5.3.6
Content-Length: 12
Content-Type: text/html

hello world

ab -k -c 10 -t 60 -n 10000000 http://localhost/hw.php
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 1000000 requests
Completed 2000000 requests
Completed 3000000 requests
Finished 3994856 requests


Server Software: Apache/2.2.3
Server Hostname: localhost
Server Port: 80

Document Path: /hw.php
Document Length: 12 bytes

Concurrency Level: 10
Time taken for tests: 60.029 seconds
Complete requests: 3994856
Failed requests: 0
Write errors: 0
Keep-Alive requests: 3994856
Total transferred: 1066626552 bytes
HTML transferred: 47938272 bytes
Requests per second: 66580.90 [#/sec] (mean)
Time per request: 0.150 [ms] (mean)
Time per request: 0.015 [ms] (mean, across all concurrent requests)
Transfer rate: 17360.44 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 0
Processing: 0 0 0.0 0 6
Waiting: 0 0 0.0 0 6
Total: 0 0 0.0 0 6

Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 0
95% 0
98% 0
99% 0
100% 6 (longest request)




So yeah, ab -t timelimit: make sure you use it correctly. One more thing: most benchmarks, including this one, are wrong.







