
xfeep commented on July 21, 2024

We also tried a small number of connections and requests:

wrk2 -c 16  -t 16 -d 60s -R 16 -H 'Connection: Close' http://127.0.0.1:8082

The result is unchanged.

from wrk2.

giltene commented on July 21, 2024

Looks like lots of read errors, and zero successful requests. Since there are no successful requests, there is no latency information...

What does the original wrk (as opposed to wrk2) produce for the same thing (when used without the -R flag)?

For comparison, I get this on my mac against the built-in apache server:

Lumpy.local-21% wrk -c 16 -t 16 -d 60s -R 16 -H 'Connection: Close' http://127.0.0.1:80/index.html
Running 1m test @ http://127.0.0.1:80/index.html
16 threads and 16 connections
Thread calibration: mean lat.: 5.317ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.414ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.878ms, rate sampling interval: 15ms
Thread calibration: mean lat.: 5.830ms, rate sampling interval: 15ms
Thread calibration: mean lat.: 5.738ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 5.083ms, rate sampling interval: 13ms
Thread calibration: mean lat.: 5.370ms, rate sampling interval: 14ms
Thread calibration: mean lat.: 4.378ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 4.105ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.178ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.758ms, rate sampling interval: 12ms
Thread calibration: mean lat.: 3.883ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 3.874ms, rate sampling interval: 10ms
Thread calibration: mean lat.: 4.266ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 4.303ms, rate sampling interval: 11ms
Thread calibration: mean lat.: 3.712ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.13ms    1.00ms    7.90ms   74.56%
    Req/Sec     1.12      9.59    111.00     98.62%
  976 requests in 1.00m, 347.89KB read
Requests/sec:     16.27
Transfer/sec:      5.80KB


xfeep commented on July 21, 2024

Maybe it is a bug in the original wrk or in Jetty 7, because even when we use only 1 connection we still get no successful requests.

 wrk -c 1  -t 1 -d 10s  -H 'Connection: Close' http://127.0.0.1:8082
Running 10s test @ http://127.0.0.1:8082
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us    -nan%
    Req/Sec     0.00      0.00     0.00      -nan%
  0 requests in 10.00s, 50.31MB read
  Socket errors: connect 0, read 40117, write 0, timeout 0
Requests/sec:      0.00
Transfer/sec:      5.03MB

When we use curl, it's OK:

curl -v -H 'Connection: Close' http://127.0.0.1:8082
* About to connect() to 127.0.0.1 port 8082 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 8082 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:8082
> Accept: */*
> Connection: Close
> 
< HTTP/1.1 200 OK
< Date: Sun, 25 Jan 2015 02:29:33 GMT
< Content-Type: text/html;charset=ISO-8859-1
< Connection: close
< Server: Jetty(7.6.13.v20130916)
> ..........................

When we use weighttp without the keep-alive option -k, or when we add the 'Connection: Close' header, all requests also fail.


xfeep commented on July 21, 2024

And when we use ab, it's OK:

$ ab  -c 1  -n 1000  -H 'Connection: close'   http://127.0.0.1:8082/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        Jetty(7.6.13.v20130916)
Server Hostname:        127.0.0.1
Server Port:            8082

Document Path:          /
Document Length:        1163 bytes

Concurrency Level:      1
Time taken for tests:   0.329 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      1318000 bytes
HTML transferred:       1163000 bytes
Requests per second:    3036.90 [#/sec] (mean)
Time per request:       0.329 [ms] (mean)
Time per request:       0.329 [ms] (mean, across all concurrent requests)
Transfer rate:          3908.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.1      0       2
Waiting:        0    0   0.1      0       2
Total:          0    0   0.1      0       2

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      2 (longest request)


giltene commented on July 21, 2024

Is the 1-connection (non-working) run above from wrk or wrk2? (Hard to tell from the command line, because the wrk2 binary is still named "wrk".)

If [the original] wrk doesn't work right with your Jetty setup, I'd open the issue with wrk (https://github.com/wg/wrk). I don't really know that much about wrk's detailed inner workings, and wrk2 is focused purely on the latency measurement part (and the associated rate-limiting need). If it is determined to be a problem in wrk and gets fixed there, I'd happily merge the fix back here.

I'd try the same line against other web servers before posting, though, as (for example) it seems to work against the built-in Apache on my Mac. And as you note above, the same setup seems to work with your Jetty server and the other load generators, so this may be a wrk+Jetty-specific issue.


xfeep commented on July 21, 2024

It is wrk, which does not need the -R option; wrk2 does need it.
In short, with the -H 'Connection: close' option:

  • wrk/wrk2 + jetty fails
  • weighttp + jetty fails
  • ab + jetty ok
  • curl + jetty ok


xfeep commented on July 21, 2024

Hi @giltene ,

wrk can be fixed with this patch

I have tried it on wrk2 and it works!


giltene commented on July 21, 2024

Cool! I assume @wg will commit something into wrk for this soon. I'll apply the same once he does...


xfeep commented on July 21, 2024

Hi @giltene, the latest wrk source has merged this patch in commit wg/wrk@522ec60.


dfdx commented on July 21, 2024

Has this issue been fixed? I can see that the corresponding line in net.c has been changed, but I experience the same issue with the latest master of wrk2: I get very good performance without Connection: Close, but with it almost all requests fail.


janmejay commented on July 21, 2024

@xfeep I noticed this, along with CLOSE_WAIT connection accumulation under load (leading to a lower rate as established connections are depleted), with a build off master (as of a few days back). This #33 solves that problem; it would be very useful if you could merge it locally (or fetch master from the fork) and see whether you still experience the same issue.


janmejay commented on July 21, 2024

I just tried this with an arbitrarily picked website (sourceware.org) and it seems to work correctly (I verified with curl -v that the connection was otherwise left intact, and that Connection: close was indeed respected by the website).

The scenario I fixed was similar: the server was closing the connection after writing the response (an opinionated choice made for load-balancing reasons). I didn't need the 'close' header, but the effect was essentially the same as far as the connection life-cycle goes.
