giltene / wrk2

A constant throughput, correct latency recording variant of wrk

License: Apache License 2.0

Makefile 0.71% Lua 1.93% C 96.95% C++ 0.42%

wrk2's Issues

Data discrepancy: Reqs/sec does not match the configuration

Here is the request being made:
./wrk -t1 -c9 -d3s -R10 --latency http://test.com/sample/hello.jsp

Output:

30 requests in 3.00s, 16.26KB read
Requests/sec: 9.99
Transfer/sec: 5.41KB

As per the documentation, shouldn't the total number of requests made to the server be 27?

Also, in the scenario below:
./wrk -t1 -c5 -d2s -R10 --latency http://test.com/sample/hello.jsp

16 requests in 2.03s, 8.67KB read
Requests/sec: 7.89
Transfer/sec: 4.28KB

How is the Reqs/sec greater than 5?
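
For reference, -R is the total work rate across all connections (the usage text quoted later on this page calls it "work rate (throughput) in requests/sec (total)"), so the reported numbers are self-consistent:

  expected requests = R × d = 10 req/s × 3 s = 30   (reported: 30 requests)
  measured rate     = 30 / 3.00 s ≈ 10 req/s        (reported: 9.99)
  second run        = 16 / 2.03 s ≈ 7.88 req/s      (reported: 7.89, under the -R10 target)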

Wrong latency distribution on the benchmark test

Dear all,
When using wrk2 to run an HTTP performance test, I found the latency distribution quite puzzling, as described below:

[screenshots: wrk2 latency distribution output]

I captured the HTTP traffic with Wireshark, and I didn't find any traffic with latency greater than 1s.

[screenshots: Wireshark capture]

WRK2 version: wrk 4.0.0 [kqueue] Copyright (C) 2012 Will Glozer
My command: wrk2 -t 4 -c 10 -d 60s --rate 1000 -s header.lua --latency "$BaseUrl"

Can anyone tell me what the problem is here? I'd appreciate your help.

support rate in minutes

I'd like to use wrk2 to test an API that downloads a large file, so I'd need to set -R to a very low value expressed in requests per minute, like -R5m. Since -R takes requests per second (total), 5 requests per minute would be about 0.083 req/s. Is this possible now, or could it be supported?

built version

Is there a pre-built binary I can put on my shell path?

Or perhaps somewhere to learn how to build this project, or Lua projects in general?

Looks like a great tool, would like it in my terminal
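
A minimal build sketch, assuming a Unix-like system with git, make, a C compiler, and the OpenSSL development headers installed (the Makefile builds the bundled LuaJIT first and leaves a wrk binary in the repo root):

  git clone https://github.com/giltene/wrk2.git
  cd wrk2
  make
  cp wrk /usr/local/bin/wrk2   # any directory on your PATH works; the target name is your choice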

Is it possible to access global data from request function

In the setup.lua example script, is it possible for all threads to be able to access the threads table from the init(), request(), and response() functions? When I try to do so currently, the table is empty and has a different address for each thread. It seems only the setup() and done() functions can access this memory.
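
For context, each wrk thread runs its own Lua VM; setup() and done() run in a separate "main" VM, which is why the table looks empty and has a different address in each thread. A hedged sketch of the documented workaround: copy values into each thread with thread:set() before it starts, and read per-thread globals back with thread:get() in done():

  local counter = 0
  local threads = {}

  function setup(thread)
     -- main VM: called once per thread before it starts
     counter = counter + 1
     thread:set("id", counter)      -- copies the value into the thread's VM
     table.insert(threads, thread)
  end

  function init(args)
     -- thread VM: 'id' set above is visible here as a global
     requests = 0
  end

  function request()
     requests = requests + 1        -- per-thread global, never shared
     return wrk.request()
  end

  function done(summary, latency, requests)
     -- main VM again: pull each thread's globals back out
     for _, t in ipairs(threads) do
        print(string.format("thread %d made %d requests", t:get("id"), t:get("requests")))
     end
  end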

100% cpu

Has anyone else experienced wrk2's CPU usage shooting up to 100% after running for a while?

Failure to meet Rate

I do:
./wrk -t10 -c100 -d30s -R2000 --latency http://192.168.56.103/

However, the rate never exceeds 450 req/s.

I can only achieve the desired rate for really low -R values, e.g. 100.

The page I am hitting is static and fairly small (a few KB).

There are no socket errors or time-outs generated by the experiments.

I run Ubuntu 16.04 with the 4.4.0-31-generic kernel, in a VM with 4 cores and 10GB RAM.

Is there anything I need to enable or disable for wrk2 to work properly?

Improvement - testing request

Hello,

I'm using wrk regularly for testing my apps, and I find it awesome. One thing I'd like to see improved is test request creation. I'm using Lua scripts for this, but could something like the following be done?

Request:

echo -e 'GET /list HTTP/1.1\r\nContent-Type: application/json\r\nHost: localhost\r\nBODY DATA\r\n\r\n' | wrk http://localhost

This would allow me to craft any HTTP request (in any tool), copy and paste it into the terminal, and be sure that wrk will create exactly this request for testing. Sometimes when testing something behind an Nginx proxy, instead of testing the app you're actually testing the proxy, because it returns a 302 redirect; this is usually caused by an incorrectly crafted request.

Do you think this would be possible, and worthwhile doing?

Thanks!
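
In the meantime, a hedged workaround sketch: the Lua request() hook returns the raw HTTP request bytes, so a request captured elsewhere can be pasted into a script verbatim (wrk still needs the URL argument to know where to connect):

  -- raw.lua: send exactly these bytes on every request
  local raw = "GET /list HTTP/1.1\r\n" ..
              "Content-Type: application/json\r\n" ..
              "Host: localhost\r\n" ..
              "\r\n"

  function request()
     return raw
  end

Run with something like: wrk -t1 -c10 -R100 -s raw.lua http://localhost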

wrk2 parameters and their usage

wrk2 is a little foreign to me compared with other load-testing tools, due to its mix of parameter options:

  • number of threads
  • number of connections
  • rate (throughput or requests/sec)

What are typical good combinations to use, and is there a relationship pattern to specifying those parameters? e.g. what if I want to double load or scale it by X?

With JMeter, Gatling, and other tools, you just specify the number of users, threads, or connections; there's no need to work out an allocation of threads vs. connections. With wrk2 you then need to add rate to the mix, and I wonder how that plays into the number of threads and connections for maintaining the rate.

I wish these two tools provided more of a write-up on how to specify these parameters in relation to each other.

I assume the naive approach, to be similar to JMeter and Gatling, would be to match all 2-3 parameters to each other, e.g. 100 threads with 100 connections and a rate of 100 (for wrk2). Otherwise, is a different combination simply optimizing how the client/generator produces or distributes the load? That optimization isn't my specialty; whether we hit CPU/memory/fd/connection limits on the generator side, and which combination produces the max load, seems like much trial and error.

How do most people use wrk and wrk2 in regards to defining the parameters?
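
For reference, a hedged reading of how the parameters relate (-R is the total across all connections, per the usage text; connections are divided among threads as in wrk):

  -t4 -c100 -R2000  →  100 / 4    = 25 connections per thread
                       2000 / 100 = 20 req/s per connection

To double the load, double -R; -t and -c only need to grow if the generator itself (CPU, file descriptors, per-connection concurrency) becomes the bottleneck.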

how to construct put requests using wrk or wrk2?

I want to load-test PUT requests with different files (10,000 files locally). How do I construct PUT requests? The closest I found is post.lua, but how do I construct a PUT request that uploads files?
Thank you!
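
A hedged sketch adapting the post.lua approach; the file path and content type below are placeholders:

  -- put.lua
  wrk.method = "PUT"
  wrk.headers["Content-Type"] = "application/octet-stream"

  local f = assert(io.open("files/0001.bin", "rb"))  -- hypothetical local file
  wrk.body = f:read("*a")
  f:close()

To rotate through the 10,000 files, you could load the file list in init() and build each request in request() with wrk.format(wrk.method, path, wrk.headers, body).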

Latency increases when threads > 1

Hi there,

I'm trying to understand why I would be seeing a jump in latency when the thread count is greater than 1.

On an c5.2xlarge ec2 instance with 4 physical CPUs (8 logical)

1 thread:

$ ./wrk -t1 -d30s -c100 -R100 http://myhost
Running 30s test @ http://myhost
  1 threads and 100 connections
  Thread calibration: mean lat.: 24.837ms, rate sampling interval: 57ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.23ms    4.27ms  49.63ms   84.20%
    Req/Sec    98.30     96.55   214.00     23.68%
  3001 requests in 30.02s, 832.31KB read
Requests/sec:     99.96
Transfer/sec:     27.72KB

4 threads:

$ ./wrk -t4 -d30s -c100 -R100 http://myhost
Running 30s test @ http://myhost
  4 threads and 100 connections
  Thread calibration: mean lat.: 189.007ms, rate sampling interval: 647ms
  Thread calibration: mean lat.: 341.843ms, rate sampling interval: 904ms
  Thread calibration: mean lat.: 336.302ms, rate sampling interval: 892ms
  Thread calibration: mean lat.: 343.629ms, rate sampling interval: 894ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   315.79ms  115.94ms 506.88ms   53.75%
    Req/Sec    24.92     10.10    38.00     70.83%
  2975 requests in 30.13s, 825.10KB read
Requests/sec:     98.74
Transfer/sec:     27.39KB

8 threads:

$ ./wrk -t8 -d30s -c100 -R100 http://myhost
Running 30s test @ http://myhost
  8 threads and 100 connections
  Thread calibration: mean lat.: 323.346ms, rate sampling interval: 902ms
  Thread calibration: mean lat.: 321.696ms, rate sampling interval: 901ms
  Thread calibration: mean lat.: 329.348ms, rate sampling interval: 895ms
  Thread calibration: mean lat.: 324.543ms, rate sampling interval: 912ms
  Thread calibration: mean lat.: 324.012ms, rate sampling interval: 907ms
  Thread calibration: mean lat.: 330.061ms, rate sampling interval: 910ms
  Thread calibration: mean lat.: 332.053ms, rate sampling interval: 914ms
  Thread calibration: mean lat.: 333.218ms, rate sampling interval: 910ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   335.85ms  129.11ms 488.96ms   79.56%
    Req/Sec    12.25      1.72    16.00     86.05%
  2994 requests in 30.06s, 830.37KB read
Requests/sec:     99.59
Transfer/sec:     27.62KB

I get similar results when I run the test for longer (5 minutes), and I observe the same latencies as measured by the server. So I wonder: is there something different about the way multi-threaded load is generated?

Same rate, different connection count - response times differ greatly - is my setup wrong?

I have a simple HTTP endpoint and I'm the only person using it (no noise from other users; I ran the tests several times).
What I notice is that running two tests with the same rate but a different number of client connections produces a significant change in response times.

If the number of connections were too low, I would expect the rate requirement not to be fulfilled.

Maybe it has something to do with thread calibration. I know proper performance testing is hard, but I cannot find a guide on how to set up the parameters (threads, rate, connections) and how they relate to thread calibration. Does anyone know what I'm doing wrong?

Details:

When I run this:
wrk -t5 -c5 -d1m -R10 --latency -s ./my_script.lua http://somewhere/something
I get this:

  5 threads and 5 connections
  Thread calibration: mean lat.: 197.251ms, rate sampling interval: 524ms
  Thread calibration: mean lat.: 196.915ms, rate sampling interval: 494ms
  Thread calibration: mean lat.: 204.040ms, rate sampling interval: 598ms
  Thread calibration: mean lat.: 181.220ms, rate sampling interval: 448ms
  Thread calibration: mean lat.: 191.452ms, rate sampling interval: 474ms
...
 50.000%  189.95ms
 75.000%  228.61ms
 90.000%  263.93ms
 99.000%  340.99ms
 99.900%  395.26ms
 99.990%  395.26ms
 99.999%  395.26ms
100.000%  395.26ms
...
600 requests in 1.00m, 6.30MB read
Requests/sec:     10.00

When I run this:
wrk -t5 -c10 -d1m -R10 --latency -s ./my_script.lua http://somewhere/something
I get this:

  5 threads and 10 connections
  Thread calibration: mean lat.: 303.611ms, rate sampling interval: 995ms
  Thread calibration: mean lat.: 353.468ms, rate sampling interval: 960ms
  Thread calibration: mean lat.: 316.724ms, rate sampling interval: 1056ms
  Thread calibration: mean lat.: 347.817ms, rate sampling interval: 1013ms
  Thread calibration: mean lat.: 330.926ms, rate sampling interval: 994ms
...
 50.000%  359.93ms
 75.000%  453.12ms
 90.000%  539.65ms
 99.000%  619.01ms
 99.900%  673.79ms
 99.990%  673.79ms
 99.999%  673.79ms
100.000%  673.79ms
600 requests in 1.00m, 6.30MB read
Requests/sec:     10.00

Should the number of connections affect response times at the same rate?

Catch up with upstream wrk

wrk2 was created in Nov. 2014 as an example of correcting coordinated omission in a load generator. It was basically a quick fork of wrk at the time, with minimal changes needed to achieve that purpose, created by @giltene and @mikeb01 as a result of a quick conversation at QCon SF.

The project turned out to be way more popular than I thought, or than originally intended. Wrk seems like a very solid base, but people looking for constant-rate capabilities and proper (not susceptible to coordinated omission) latency measurement seem to have picked up wrk2.

But since we had not put any real work into maintaining or enhancing wrk2 over the years, I'm sure wrk has added quite a bit in the 5 years since that we should simply "catch up on".

One simple way to do this is to follow Vizzini's directive and "go back to the beginning". Since applying the changes to wrk 5 years ago was "fairly simple", and since we have not strayed very far from the original work done in 2014, we can just pick up the latest/greatest wrk and apply the same logical changes to it that we did back then, to get an "up to date" wrk2 with all the wrk goodies. It took me and @mikeb01 only a few days to do it the first time, and applying it again "should" be even quicker... ;-)

I would prefer that we do this before starting to add any new features from PRs that have accumulated over the years, and any new features we want to add that are wrk2-specific (e.g. I'd really like to add a .hlog output support, which has long been part of hdrhistogram_c). Once we apply those PRs and additions, catching up with wrk will involve much more work...

Does anyone out there want to volunteer to do this "catch up to latest wrk" work?

p100 corrected > p100 uncorrected

IIRC, a corrected histogram injects additional values between the highest observed value and the expected interval. So it makes sense that the p100 of the uncorrected histogram should be the same as the p100 of the corrected version. I've played with HdrHistogram in Java and it does appear to behave as I expect, but wrk2 sometimes produces corrected p100s much higher than the uncorrected version.

Final few lines of the corrected version:
2306.047 1.000000 691130114 447392431.11
2306.047 1.000000 691130114 536870912.00
2306.047 1.000000 691130114 596523251.36
2306.047 1.000000 691130114 671088630.00
2308.095 1.000000 691130115 766958458.78
2308.095 1.000000 691130115 inf

[Mean = 1.335, StdDeviation = 4.240]

[Max = 2306.048, Total count = 691130115]

[Buckets = 27, SubBuckets = 2048]

Final few lines of the uncorrected version:
201.471 1.000000 691130114 383479229.39
201.471 1.000000 691130114 447392431.11
201.471 1.000000 691130114 536870912.00
201.471 1.000000 691130114 596523251.36
201.471 1.000000 691130114 671088630.00
203.775 1.000000 691130115 766958458.78
203.775 1.000000 691130115 inf

[Mean = 0.483, StdDeviation = 0.758]

[Max = 203.648, Total count = 691130115]

[Buckets = 27, SubBuckets = 2048]

[The command line used was: wrk -t5 -c32 -d24h -R8000 -U -sSCRIPT URL DATAFILE
version string: wrk 4.0.0 [epoll] Copyright (C) 2012 Will Glozer]
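
One hedged explanation for the gap: wrk2 does not correct by injecting copies of observed values; it measures each request's latency from the time the request should have been sent according to the constant-rate schedule. A backed-up connection can therefore record a corrected value far above any observed service time:

  scheduled send time:  t
  actual send time:     t + ~2102 ms   (connection was backed up)
  service time:         ~204 ms        (the uncorrected max)
  corrected latency:    2102 + 204 ≈ 2306 ms   (the corrected max)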

Failed Installing wrk2 on Mac

I tried to install wrk2 on a Mac (macOS Mojave version 10.14 (18A391)) via Homebrew. Here is the output:

==> Installing wrk2 from jabley/wrk2
/usr/bin/sandbox-exec -f /private/tmp/homebrew20181011-62982-3usk4m.sb nice /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/bin/ruby -W0 -I /usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/ruby-macho-2.1.0/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/plist-3.4.0/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/backports-3.11.4/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/activesupport-5.2.1/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/tzinfo-1.2.5/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/thread_safe-0.3.6/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/minitest-5.11.3/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/i18n-1.1.0/lib:/usr/local/Homebrew/Library/Homebrew/vendor/bundle-standalone/bundler/../ruby/2.3.0/gems/concurrent-ruby-1.0.5/lib:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/gems/2.3.0/gems/did_you_mean-1.0.0/lib:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/site_ruby/2.3.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/site_ruby/2.3.0/x86_64-darwin9.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/site_ruby/2.3.0/universal-darwin9.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/site_ruby:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/vendor_ruby/2.3.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/vendor_ruby/2.3.0/x86_64-darwin9.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/vendor_ruby/2.3.0/universal-darwin9.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/vendor_ruby:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/x86_64-darwin9.0:/usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/lib/ruby/2.3.0/universal-darwin9.0:/usr/local/Homebrew/Library/Homebrew -- /usr/local/Homebrew/Library/Homebrew/build.rb /usr/local/Homebrew/Library/Taps/jabley/homebrew-wrk2/wrk2.rb --verbose --HEAD
==> Cloning https://github.com/giltene/wrk2.git
Updating /Users/satrioadip/Library/Caches/Homebrew/wrk2--git
git config remote.origin.url https://github.com/giltene/wrk2.git
git config remote.origin.fetch \+refs/heads/master:refs/remotes/origin/master
git fetch origin
==> Checking out branch master
git checkout -f master --
Already on 'master'
Your branch is up to date with 'origin/master'.
git reset --hard origin/master
HEAD is now at e0109df Merge pull request #18 from alex-koturanov/master
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/SCRIPTING /private/tmp/d20181011-62984-h35xj6/SCRIPTING
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/LICENSE /private/tmp/d20181011-62984-h35xj6/LICENSE
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/Makefile /private/tmp/d20181011-62984-h35xj6/Makefile
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/NOTICE /private/tmp/d20181011-62984-h35xj6/NOTICE
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/README.md /private/tmp/d20181011-62984-h35xj6/README.md
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/.gitignore /private/tmp/d20181011-62984-h35xj6/.gitignore
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/scripts/. /private/tmp/d20181011-62984-h35xj6/scripts
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/deps/. /private/tmp/d20181011-62984-h35xj6/deps
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/CoordinatedOmission/. /private/tmp/d20181011-62984-h35xj6/CoordinatedOmission
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/.git/. /private/tmp/d20181011-62984-h35xj6/.git
cp -pR /Users/satrioadip/Library/Caches/Homebrew/wrk2--git/src/. /private/tmp/d20181011-62984-h35xj6/src
cp -pR /private/tmp/d20181011-62984-h35xj6/SCRIPTING /private/tmp/wrk2-20181011-62984-awtwi9/SCRIPTING
cp -pR /private/tmp/d20181011-62984-h35xj6/LICENSE /private/tmp/wrk2-20181011-62984-awtwi9/LICENSE
cp -pR /private/tmp/d20181011-62984-h35xj6/Makefile /private/tmp/wrk2-20181011-62984-awtwi9/Makefile
cp -pR /private/tmp/d20181011-62984-h35xj6/NOTICE /private/tmp/wrk2-20181011-62984-awtwi9/NOTICE
cp -pR /private/tmp/d20181011-62984-h35xj6/README.md /private/tmp/wrk2-20181011-62984-awtwi9/README.md
cp -pR /private/tmp/d20181011-62984-h35xj6/.gitignore /private/tmp/wrk2-20181011-62984-awtwi9/.gitignore
cp -pR /private/tmp/d20181011-62984-h35xj6/scripts/. /private/tmp/wrk2-20181011-62984-awtwi9/scripts
cp -pR /private/tmp/d20181011-62984-h35xj6/deps/. /private/tmp/wrk2-20181011-62984-awtwi9/deps
cp -pR /private/tmp/d20181011-62984-h35xj6/CoordinatedOmission/. /private/tmp/wrk2-20181011-62984-awtwi9/CoordinatedOmission
cp -pR /private/tmp/d20181011-62984-h35xj6/.git/. /private/tmp/wrk2-20181011-62984-awtwi9/.git
cp -pR /private/tmp/d20181011-62984-h35xj6/src/. /private/tmp/wrk2-20181011-62984-awtwi9/src
chmod -Rf +w /private/tmp/d20181011-62984-h35xj6
==> make
Building LuaJIT...
HOSTCC    host/minilua.o
HOSTCC    host/buildvm_asm.o
HOSTCC    host/buildvm_peobj.o
HOSTCC    host/buildvm_lib.o
HOSTCC    host/buildvm_fold.o
CC        lj_gc.o
CC        lj_char.o
CC        lj_obj.o
CC        lj_str.o
CC        lj_tab.o
CC        lj_func.o
CC        lj_udata.o
CC        lj_meta.o
CC        lj_debug.o
CC        lj_state.o
CC        lj_vmevent.o
CC        lj_vmmath.o
CC        lj_strscan.o
CC        lj_api.o
CC        lj_lex.o
CC        lj_parse.o
CC        lj_bcread.o
CC        lj_bcwrite.o
CC        lj_load.o
CC        lj_ir.o
CC        lj_opt_mem.o
CC        lj_opt_narrow.o
CC        lj_opt_dce.o
CC        lj_opt_loop.o
CC        lj_opt_split.o
CC        lj_opt_sink.o
CC        lj_mcode.o
CC        lj_snap.o
CC        lj_asm.o
CC        lj_trace.o
CC        lj_gdbjit.o
CC        lj_ctype.o
CC        lj_cdata.o
CC        lj_cconv.o
CC        lj_ccall.o
CC        lj_ccallback.o
CC        lj_carith.o
CC        lj_clib.o
CC        lj_cparse.o
CC        lj_lib.o
CC        lj_alloc.o
CC        lib_aux.o
CC        lib_package.o
CC        lib_init.o
CC        luajit.o
HOSTLINK  host/minilua
ld: library not found for -lgcc_s.10.4
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [host/minilua] Error 1
make: *** [deps/luajit/src/libluajit.a] Error 2

==> Formula
Tap: jabley/wrk2
Path: /usr/local/Homebrew/Library/Taps/jabley/homebrew-wrk2/wrk2.rb
==> Configuration
HOMEBREW_VERSION: 1.7.7
ORIGIN: https://github.com/Homebrew/brew
HEAD: c54a657cd5987cba2718f7012a55101324fde8b1
Last commit: 3 days ago
Core tap ORIGIN: https://github.com/Homebrew/homebrew-core
Core tap HEAD: 683982a01204ddce1165daff00efc129d2e11adb
Core tap last commit: 12 hours ago
HOMEBREW_PREFIX: /usr/local
HOMEBREW_ENABLE_AUTO_UPDATE_MIGRATION: 1
CPU: octa-core 64-bit skylake
Homebrew Ruby: 2.3.7 => /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.3.7/bin/ruby
Clang: 10.0 build 1000
Git: 2.17.1 => /Library/Developer/CommandLineTools/usr/bin/git
Curl: 7.54.0 => /usr/bin/curl
Java: 1.8.0_172
macOS: 10.14-x86_64
CLT: 10.0.0.0.1.1535735448
Xcode: N/A
==> ENV
HOMEBREW_CC: clang
HOMEBREW_CXX: clang++
MAKEFLAGS: -j8
CMAKE_PREFIX_PATH: /usr/local/opt/openssl:/usr/local
CMAKE_INCLUDE_PATH: /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include/libxml2:/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenGL.framework/Versions/Current/Headers
CMAKE_LIBRARY_PATH: /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/System/Library/Frameworks/OpenGL.framework/Versions/Current/Libraries
PKG_CONFIG_PATH: /usr/local/opt/openssl/lib/pkgconfig
PKG_CONFIG_LIBDIR: /usr/lib/pkgconfig:/usr/local/Homebrew/Library/Homebrew/os/mac/pkgconfig/10.14
HOMEBREW_GIT: git
HOMEBREW_SDKROOT: /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk
ACLOCAL_PATH: /usr/local/share/aclocal
PATH: /usr/local/Homebrew/Library/Homebrew/shims/mac/super:/usr/local/opt/openssl/bin:/usr/bin:/bin:/usr/sbin:/sbin

Error: jabley/wrk2/wrk2 HEAD-e0109df did not build
Logs:
     /Users/satrioadip/Library/Logs/Homebrew/wrk2/01.make
     /Users/satrioadip/Library/Logs/Homebrew/wrk2/00.options.out
     /Users/satrioadip/Library/Logs/Homebrew/wrk2/01.make.cc
If reporting this issue please do so at (not Homebrew/brew or Homebrew/core):
https://github.com/jabley/homebrew-wrk2/issues

I also tried to build from source, here is the output:

Building LuaJIT...
HOSTCC    host/minilua.o
HOSTLINK  host/minilua
ld: warning: directory not found for option '-L/usr/local/Cellar/gsl/1.16/lib/'
ld: library not found for -lgcc_s.10.4
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [host/minilua] Error 1
make: *** [deps/luajit/src/libluajit.a] Error 2

Is there any solution for this? Thank you.
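
A hedged note: the "-lgcc_s.10.4" link error is a long-standing LuaJIT-on-macOS symptom; LuaJIT's build defaults MACOSX_DEPLOYMENT_TARGET to a very old value when it is unset, and modern linkers no longer ship that library. Setting the deployment target to your actual macOS version before building is the commonly reported workaround:

  export MACOSX_DEPLOYMENT_TARGET=10.14
  make clean && make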

URLs with underscores are considered invalid.

Hi,

I noticed today that URLs with underscores are seen as invalid:

» wrk2 -c 100 -s test.lua -H "x-openrtb-version: 2.2" "http://smadexeast.rubicon.endpoints.ntoggle.com:8000/supply-partners/rubicon" -R 1000 -d 10
Running 10s test @ http://smadexeast.rubicon.endpoints.ntoggle.com:8000/supply-partners/rubicon
  2 threads and 100 connections
^C  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    34.39ms   16.09ms 242.56ms   85.53%
    Req/Sec       -nan      -nan   0.00      0.00%
  4147 requests in 4.25s, 4.49MB read
Requests/sec:    975.63
Transfer/sec:      1.06MB
[I] apenney at arya in ~
» wrk2 -c 100 -s test.lua -H "x-openrtb-version: 2.2" "http://smadex_east.rubicon.endpoints.ntoggle.com:8000/supply-partners/rubicon" -R 1000 -d 10
invalid URL: http://smadex_east.rubicon.endpoints.ntoggle.com:8000/supply-partners/rubicon
Usage: wrk <options> <url>
  Options:
    -c, --connections <N>  Connections to keep open
    -d, --duration    <T>  Duration of test
    -t, --threads     <N>  Number of threads to use

    -s, --script      <S>  Load Lua script file
    -H, --header      <H>  Add header to request
    -L  --latency          Print latency statistics
    -U  --u_latency        Print uncorrected latency statistics
        --timeout     <T>  Socket/request timeout
    -B, --batch_latency    Measure latency of whole
                           batches of pipelined ops
                           (as opposed to each op)
    -v, --version          Print version details
    -R, --rate        <T>  work rate (throughput)
                           in requests/sec (total)
                           [Required Parameter]


  Numeric arguments may include a SI unit (1k, 1M, 1G)
  Time arguments may include a time unit (2s, 2m, 2h)
[I] apenney at arya in ~
»

The http_parser.c code seems to allow underscores, so I'm not really sure where to start with fixing this. This was tested against master.

ssl context

It seems the code inherited from the original wrk sets up an SSL context (in wrk.c), but the implementation is incomplete, as it does not load a certificate.

latency vs u_latency

I don't understand the reasoning behind the recorded (corrected) vs. uncorrected response times. I created a simple case where the server delays for 1 second.

I started wrk with 1 thread and 1 connection and with a target rate of 5 tps.

Obviously the rate can't be achieved, but the response time is going to be 1 second. When I run for 30 seconds, I find that the average response time is reported as 16 seconds, and the rate was indeed only 1 tps.

I don't understand why it would be 16 seconds. The server is guaranteed to return in 1 second, and if I print out the u_latency it is indeed 1 second. What is the reasoning behind the corrected values? We have a fork of this repo that dumps stats at intervals, and the behavior is that the average keeps increasing forever when the server can't keep up with the client's rate. I find that the value is not useful and is misleading. The rate not meeting the target is enough to tell me the server can't keep up.

I made a local change to the code to use u_latency instead of latency for the summary stats, and it shows what I would expect to see. I would like an option to select which one appears in the summary stats; maybe an option for --u_latency and --latency, and another for --details, or something along those lines. Is that something that would be of interest to you?
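
For reference, a rough model of why the corrected average grows without bound, assuming wrk2 charges each request from its scheduled send time: at -R5 with a 1-second service time, the n-th completed request was scheduled at n/5 s but completes around n s, so

  corrected latency of request n ≈ n - n/5 = 0.8 n seconds
  average over a 30 s run        ≈ 0.8 × 15 ≈ 12 s, growing with run length

which is the same order as the 16 s reported. The corrected number answers "how long would a request issued on schedule wait?", not "how long did the server take?".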

Problems when threads > rate

There seems to be an issue with wrk2 when the number of threads specified (-t) is higher than the rate (-R): wrk2 seems to lock up.

Does `-c100 -R100` mean "100 connections at 1 RPS" or "100 connections at random RPS"?

I have a web app exposing an API which will be rate limited at 1 request per second per IP.
That being said, I need to benchmark the maximum number of simultaneous users my app is able to absorb while keeping P99 latency under 700ms.

In my understanding, wrk -t4 -c10000 -d70 -R 10000 -L http://localhost:8080 would translate to: using 4 threads, create 10,000 connections requesting localhost:8080 at 10,000 requests/s ALL TOGETHER, for 60 seconds (70 minus 10 s for initialization).

Is my understanding correct? Or will wrk2 reuse some connections to issue more than 1 RPS on some of them? For example, of the 10,000 connections opened, is it possible that only one would actually be sending 10,000 RPS while the 9,999 others stay open without requests, or something similar?

EDIT
Just adding some illustration.
Does wrk -t4 -c3 -d70 -R 3 -L http://localhost:8080 mean:

option 1:

           |<  1 second  >|
client #1  |  1 request   |
client #2  |  1 request   |
client #3  |  1 request   |

or

option 2:

           |<  1 second  >|
client #1  |  2 request   |
client #2  |              |
client #3  |  1 request   |
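
A hedged note: wrk2 divides the total rate among threads and paces each thread's connections evenly, so each connection is scheduled at roughly R / c requests per second, which corresponds to option 1:

  per-connection rate = R / c = 3 / 3 = 1 request/s
  (and in the earlier example: 10000 / 10000 = 1 request/s per connection)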

Where can I get info about the format of wrk2's output?

Sorry if this isn't the place for this question; I didn't find your email. I don't understand what the 1/(1-Percentile) column in the --latency report is, or what "rate sampling interval" means in the thread calibration lines at the beginning.
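
For reference, in HdrHistogram-style percentile output the 1/(1-Percentile) column expresses how rare it is to exceed that latency:

  percentile = 0.999  →  1 / (1 - 0.999) = 1000

i.e. one request in 1000 is slower than the value on that line. The "rate sampling interval" appears to be the interval at which each thread samples its achieved request rate for the Req/Sec statistics, chosen during the calibration phase.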

Unable to pull table from thread

Hi,
I am unable, in the done() function, to pull a table that was set in a thread. Content is added to the table in the response() function.

I tried thread:get("table name"), but I get an empty table.

Cannot build on OS X

First of all, I tried make:

~/src/wrk2 (master) → make
CC src/wrk.c
In file included from src/wrk.c:3:
src/wrk.h:11:10: fatal error: 'openssl/ssl.h' file not found
#include <openssl/ssl.h>
         ^
1 error generated.
make: *** [obj/wrk.o] Error 1

OK, fine, we know how to fix that on a Mac:

CFLAGS += -I/usr/local/include
 ~/src/wrk2 (master) → make
CC src/script.c
src/script.c:29:37: error: array has incomplete element type 'const struct luaL_reg'
static const struct luaL_reg addrlib[] = {
                                    ^
src/script.c:29:21: note: forward declaration of 'struct luaL_reg'
static const struct luaL_reg addrlib[] = {
                    ^
src/script.c:35:38: error: array has incomplete element type 'const struct luaL_reg'
static const struct luaL_reg statslib[] = {
                                     ^
src/script.c:29:21: note: forward declaration of 'struct luaL_reg'
static const struct luaL_reg addrlib[] = {
                    ^
src/script.c:41:39: error: array has incomplete element type 'const struct luaL_reg'
static const struct luaL_reg threadlib[] = {
                                      ^
src/script.c:29:21: note: forward declaration of 'struct luaL_reg'
static const struct luaL_reg addrlib[] = {
                    ^
src/script.c:53:5: warning: implicit declaration of function 'luaL_register' is invalid in C99 [-Wimplicit-function-declaration]
    luaL_register(L, NULL, addrlib);
    ^
src/script.c:109:20: warning: implicit declaration of function 'lua_objlen' is invalid in C99 [-Wimplicit-function-declaration]
    size_t count = lua_objlen(L, -1);
                   ^
2 warnings and 3 errors generated.
make: *** [obj/script.o] Error 1

I have no idea how to fix that. Could you please help?
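
A hedged guess at the cause: -I/usr/local/include can put a system Lua's headers (5.2+, where luaL_reg, luaL_register, and lua_objlen no longer exist) ahead of the bundled LuaJIT headers in deps/luajit/src, which matches these exact errors. Pointing only at OpenSSL's own include directory avoids the shadowing, e.g.:

  CFLAGS += -I/usr/local/opt/openssl/include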

startup causing what appear to be false timeouts

To troubleshoot, I am using a server that returns after a 1-second sleep. The payload of the response is something like the one below, where health.count is a Java AtomicInteger value that is incremented every time the resource is invoked:

{"status":"UP","customHealthCheck":{"status":"UP","health.count":8}

My wrk command line is:

wrk -s chip.lua -R 1 -t 1 -c 10 -d 30 http://localhost:8080/cc-auth-gateway/health

The Lua script is simple:

function init(args)
   requests = 0
   responses = 0
   wrk.method = 'GET'
   wrk.headers["Api-Key"] = "RTM"
   wrk.headers["User-Id"] = "TEST"
   wrk.headers["Accept"] = "application/json;v=1"
   wrk.headers["Content-Type"] = "application/json;v=1"
end

function response(status, headers, body)
   --print(status, body)
end

I added the line labeled "stopped" in response_completed() and the "timeout" line in check_timeout().
The "stopped" output shows the diff between c->start and now, which is far less than a second. It also shows the body being the same for nearly all of the connections.

In addition, health.count is identical across them. If the benchmark had really called my server, health.count would have shown an incrementing value.

The server logs showed that the endpoint was called only once, and not the 10 times that would have been expected.

..
..

Running 30s test @ http://localhost:8080/cc-auth-gateway/health
1 threads and 10 connections
stopped: conn: 0x7fe4c50020f8 now:1522155115404266 start:1522155115402504 diff:1762 status: 200 parser: 0x7fe4c5002100 body: 0x7fe4c5800400 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c50041f0 now:1522155115409766 start:1522155115408162 diff:1604 status: 200 parser: 0x7fe4c50041f8 body: 0x7fe4c6000400 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c50062e8 now:1522155115415657 start:1522155115413599 diff:2058 status: 200 parser: 0x7fe4c50062f0 body: 0x7fe4c6000c00 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c50083e0 now:1522155115419798 start:1522155115418591 diff:1207 status: 200 parser: 0x7fe4c50083e8 body: 0x7fe4c5800c00 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c500a4d8 now:1522155115424255 start:1522155115423011 diff:1244 status: 200 parser: 0x7fe4c500a4e0 body: 0x7fe4c5014e00 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c500c5d0 now:1522155115429655 start:1522155115428230 diff:1425 status: 200 parser: 0x7fe4c500c5d8 body: 0x7fe4c5015200 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c500e6c8 now:1522155115435156 start:1522155115433592 diff:1564 status: 200 parser: 0x7fe4c500e6d0 body: 0x7fe4c3013200 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c50107c0 now:1522155115438764 start:1522155115437429 diff:1335 status: 200 parser: 0x7fe4c50107c8 body: 0x7fe4c3013a00 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c50128b8 now:1522155115445146 start:1522155115443150 diff:1996 status: 200 parser: 0x7fe4c50128c0 body: 0x7fe4c6001400 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":13},"diskSpace"
stopped: conn: 0x7fe4c5000000 now:1522155116411484 start:1522155115397215 diff:1014269 status: 200 parser: 0x7fe4c5000008 body: 0x7fe4c6800800 body:{"status":"UP","customHealthCheck":{"status":"UP","health.count":14},"diskSpace"
timeout conn:0 connp: 0x7fe4c5000000 now:1522155117447507 start:1522155115397215 diff:2050292
timeout conn:1 connp: 0x7fe4c50020f8 now:1522155117447507 start:1522155115402504 diff:2045003
timeout conn:2 connp: 0x7fe4c50041f0 now:1522155117447507 start:1522155115408162 diff:2039345
timeout conn:3 connp: 0x7fe4c50062e8 now:1522155117447507 start:1522155115413599 diff:2033908

Assertion failure while running benchmark

I get this error on running wrk2:

    Benchmark:
      ++++++++++++++++++++
      100Req/s Duration:300s open connections:20
      wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.

I'm not sure why this is happening. If I run it again it sometimes works, but then fails again arbitrarily. I'm running this inside a Docker container, if that could be an issue.

Output format options

Hello! Are there any options to generate machine-readable reports, e.g. JSON, YAML, or at least CSV? :)
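
For what it's worth, a hedged sketch: the Lua done() hook exposes the summary and latency histogram, so a script can emit its own machine-readable report (times from wrk are in microseconds):

  -- report.lua
  function done(summary, latency, requests)
     io.write("requests,duration_us,mean_us,p50_us,p99_us,max_us\n")
     io.write(string.format("%d,%d,%.2f,%.2f,%.2f,%.2f\n",
        summary.requests, summary.duration,
        latency.mean, latency:percentile(50.0),
        latency:percentile(99.0), latency.max))
  end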

Compile errors (dereferencing pointer to incomplete type) since wrk 4 backport

Hey,
I switched to the fork because I needed the -R option. Since commit 9e6583f (Backport of wrk 4 changes) it won't compile.
The error is "error: dereferencing pointer to incomplete type" for everything related to the struct addrinfo. It seems it's provided directly by Lua. I tried to hunt it down, but my time is limited at the moment. For now I work with the commit before the backport, and it works just fine.
Thanks!

Number of created connections is sometimes greater by 1 than specified on the command line

Test code (Node.js 6):

const express = require('express');
const logger = require('log4js').getLogger();
const uid   = require('uid');
const assert = require('assert');
var app = express();

var reqCnt = {};
app.get('/', (req,res)=>{
    logger.debug(`Request, socket id=${req.connection.uid}`);
    assert.notEqual(reqCnt[req.connection.uid], undefined);
    reqCnt[req.connection.uid]++;
    res.send('ok');
});

setInterval( ()=>{
  var uids = Object.keys(reqCnt);
  //logger.debug(`Connection number: ${uids.length}`);
  for ( var uid of uids ) {
    //logger.debug(`${uid}: ${reqCnt[uid]} req/s`);
    reqCnt[uid] = 0;
  }
}, 1000);

var httpSrv = app.listen(8080);
httpSrv
    .on('connection', function(socket) {
      socket.uid = uid();
      reqCnt[socket.uid] = 0; // zero requests on that connection
      logger.debug(`connection is established: id=${socket.uid}, local: ${socket.localAddress}:${socket.localPort}, remote: ${socket.remoteAddress}:${socket.remotePort}`);
      logger.debug(socket.address());
      socket
      .on('data', function(data) {
          //logger.debug(`id=${socket.uid}, Data: unshown`);
      })
      .on('timeout', function() {
        logger.debug(`socket timeout: local: ${socket.localAddress}:${socket.localPort}, remote: ${socket.remoteAddress}:${socket.remotePort}`);
      })
      .on('close', function() {
        //console.log(socket);
        logger.debug(`socket closed: id=${socket.uid}, local: ${socket.localAddress}:${socket.localPort}, remote: ${socket.remoteAddress}:${socket.remotePort}`);
      });
    });

Running wrk2 with

./wrk -d 10 -t 2 -c 8 -R 20 http://localhost:8080

Output of node server.js | grep established:
[2016-08-22 09:57:14.123] [DEBUG] [default] - connection is established: id=wrpb37o, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44399
[2016-08-22 09:57:14.161] [DEBUG] [default] - connection is established: id=wdy8o3m, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44400
[2016-08-22 09:57:14.162] [DEBUG] [default] - connection is established: id=i5tqom2, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44401
[2016-08-22 09:57:14.162] [DEBUG] [default] - connection is established: id=eedh7a2, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44402
[2016-08-22 09:57:14.163] [DEBUG] [default] - connection is established: id=op3an49, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44403
[2016-08-22 09:57:14.163] [DEBUG] [default] - connection is established: id=h1o8a37, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44404
[2016-08-22 09:57:14.163] [DEBUG] [default] - connection is established: id=sqr6t8d, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44405
[2016-08-22 09:57:14.164] [DEBUG] [default] - connection is established: id=unk79u6, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44406
[2016-08-22 09:57:14.164] [DEBUG] [default] - connection is established: id=m7g0l5b, local: ::ffff:127.0.0.1:8080, remote: ::ffff:127.0.0.1:44407

There are 9 connections, although I specified 8.
I have currently switched to Apache Bench (ab); it passes this test successfully.

Startup issues a burst of requests causing timeouts

I am experiencing timeouts at the beginning of my test, particularly when the number of threads is high and the response time is in the 500 ms range. I created a small test, set threads to 10 and connections to 50 with a rate of 1 tps, and forced my backend to respond with 1-second latency. I put a print() in my Lua response() handler. What I notice is that all 50 connections send a request at startup, which exceeds the 1 tps, and since the response times are so large, this generates many timeouts.

invalid option --R

Using the command from the README file:

wrk -t2 -c100 -d15s -R2000 http://127.0.0.1:8000/

Output:

wrk: invalid option -- R
Usage: wrk <options> <url>                            
  Options:                                            
    -c, --connections <N>  Connections to keep open   
    -d, --duration    <T>  Duration of test           
    -t, --threads     <N>  Number of threads to use   
                                                      
    -s, --script      <S>  Load Lua script file       
    -H, --header      <H>  Add header to request      
        --latency          Print latency statistics   
        --timeout     <T>  Socket/request timeout     
    -v, --version          Print version details      
                                                      
  Numeric arguments may include a SI unit (1k, 1M, 1G)
  Time arguments may include a time unit (2s, 2m, 2h)

send advanced http request and capture response using wrk/wrk2

Hi
I am trying to use wrk to soak-test the REST API interface of a product.
As part of this, I am trying to send the request given at the bottom via wrk.
I am not able to set the -d payload properly via a JSON file or the wrk array.
Will you be able to provide me the proper wrk array format or JSON format?
I am NOT planning to use Docker, as my server is not a standalone web server; it's part of the product.

I also want to see the response, to validate it.

Appreciate any help.
Thanks
M
The curl request that I want to send via wrk:
curl -k -i -d '{"jsonrpc":"2.0","id":101,"method":"ProvisionSubscriberInfo", "params":{"info": {"thing1": "@object_1"},"subId": 911}}' -X POST https://TPC-D7-07-001.phaedrus.sandvine.com/policyengine/v1/functions/ -H 'Authorization:
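
A hedged sketch of that curl request as a wrk Lua script (the Authorization value is truncated in the issue, so it remains a placeholder):

  -- provision.lua
  wrk.method = "POST"
  wrk.body   = '{"jsonrpc":"2.0","id":101,"method":"ProvisionSubscriberInfo", "params":{"info": {"thing1": "@object_1"},"subId": 911}}'
  wrk.headers["Content-Type"] = "application/json"
  -- wrk.headers["Authorization"] = "Basic ..."  -- value truncated in the issue

  function response(status, headers, body)
     print(status, body)  -- prints every response; useful for validation, slow under load
  end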

Basic Usage Example Fails

This is the example given for basic usage:

wrk -t2 -c100 -d30s -R2000 http://127.0.0.1:8080/index.html

I believe wrk needs to be replaced with wrk2; at least, after installing this with Homebrew, wrk2 was the name of the executable that got installed.

confusing results when run with -H 'Connection: Close' for non-keepAlive http benchmark

Thanks for this excellent tool! We can use it to get more accurate latency records.
But when we try to use wrk2 for a non-keep-alive HTTP benchmark, the results are confusing, e.g.:

wrk2 -c 32  -t 16 -d 60s -R 60000 -H 'Connection: Close' http://127.0.0.1:8082
Running 1m test @ http://127.0.0.1:8082
  16 threads and 32 connections
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 9223372036854776.000ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     -nanus    -nanus   0.00us    0.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  0 requests in 1.00m, 2.05GB read
  Socket errors: connect 0, read 1675550, write 0, timeout 0
Requests/sec:      0.00
Transfer/sec:     35.01MB

In this real example, the server is a Jetty web server listening on port 8082.

Deadlock if -R is < -t

It's all in the title: the program freezes if -R is smaller than -t. I can imagine why it happens, but it should be caught.

Incorrect Reqs/Sec when using pipeline script

See the command used:
./wrk -t1 -c1 -d1s -R1 --latency http://test.com/sample/hello.jsp -s pipeline_more.lua -- /100

Output

35 requests in 1.17s, 18.99KB read
Requests/sec: 29.90
Transfer/sec: 16.22KB

Logs captured on the server show the count given below:
[test logs]$ wc -l localhost_access_log.2017-09-07.txt
60 localhost_access_log.2017-09-07.txt
[oqe@test logs]$

Also, if we double the concurrent HTTP connections in the above command (i.e., -c2), we get twice the requests captured in the server logs as well as in the wrk data:

./wrk -t1 -c2 -d1s -R1 --latency http://test.com/sample/hello.jsp -s pipeline_more.lua -- /100

Output

62 requests in 1.01s, 34.27KB read
Requests/sec: 61.53
Transfer/sec: 34.01KB

Logs captured on the server show the count given below:
[test logs]$ wc -l localhost_access_log.2017-09-07.txt
120 localhost_access_log.2017-09-07.txt
[oqe@test logs]$

This means wrk2's accounting captures only half of the requests being made to the server.

Improve aeTimeEvent resolution from 1 msec to 1 usec

One of the valid "nits" with wrk2 is that it can "over-report" latencies by up to 1msec because the rate-limiting model uses the call:
aeCreateTimeEvent(thread->loop, msec_to_wait, delay_request, c, NULL);
to wait before sending a request if "its time has not yet come". Because of the 1msec resolution of the ae async framework's aeTimeEvent and aeCreateTimeEvent, this can end up "oversleeping" by up to a millisecond, which ends up looking like a server problem when it is actually a load generator problem.

And the approach of "forgiving up to 1msec" is not a good one, as such an approach would miss real issues. IMO it is better to report pessimistic (could be somewhat worse than reality) latency numbers than ones that are better than reality.

But modern *nix variants can deal with clocks at a much finer resolution than 1msec (with e.g. nanosleep(), and timerfd), and the events should really be using a much finer resolution (e.g. 10-20 usec would not be unreasonable).

The really cool code in ae.c and friends appears to have originated in Redis, and has not been touched in "forever". I'd like to improve the basic aeTimeEvent in that framework to include microsecond-resolution information, along with a configurable quantum for actual time-event resolution chunking.

The approach I'd take would probably keep the current external APIs (e.g. aeCreateTimeEvent, which takes a 1-msec-unit time parameter) and all the current fields in aeTimeEvent (including when_sec and when_ms), but add an additional when_usec field for an optional microseconds-within-the-millisecond amount (defaulting to 0) that some APIs may supply. We would then add additional APIs for those who want finer resolution (e.g. aeCreateTimeEventUsec(), aeWaitUsec(), aeGetTimeUsec()). We would change the underlying implementations that currently populate and use struct timevals (like aeProcessEvents() and aeApiPoll(), which already support microsecond resolution) to correctly populate and use usec-resolution information, and would use a timerfd to support sub-millisecond-resolution timing in epoll_wait() rather than relying on the timeout parameter.

The benefit of all this will be much more timely wakeups for delayed requests, less pessimistic reporting of sub-millisecond response time levels, and better per-thread handling when request rates above 1000/sec/thread are actually possible.

wrk.lookup needs to gracefully handle errors

This came up while testing a proxy that uses DNS results to control load. When bandwidth reaches a software limit, DNS queries switch to a "noroute" C record. The Lua script would do a DNS query every Nth request to see if the proxy had started denying requests.

The problem is that wrk.lookup immediately causes wrk2 to terminate upon failing to get a connectable DNS result.

It would be better if wrk.lookup returned a status code to indicate the lookup failure instead of terminating everything. I've currently hacked script.c to return a status code instead of terminating.
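
A sketch of the error-returning shape this issue proposes (hypothetical API behavior, not what wrk2 currently does):

  -- hypothetical: wrk.lookup returns nil/empty on failure instead of terminating
  local addrs = wrk.lookup("example.com", "80")
  if addrs == nil or #addrs == 0 then
     print("lookup failed; backing off")   -- the script decides how to react
  else
     -- use addrs as today
  end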
