bojand / ghz

Simple gRPC benchmarking and load testing tool

Home Page: https://ghz.sh

License: Apache License 2.0

Go 80.43% JavaScript 14.42% CSS 0.06% HTML 4.29% Shell 0.02% Makefile 0.49% Dockerfile 0.30%
grpc hacktoberfest

ghz's Introduction


ghz

gRPC benchmarking and load testing tool.

Documentation

All documentation at ghz.sh.

Install

Download

  1. Download a prebuilt executable binary for your operating system from the GitHub releases page.
  2. Unzip the archive and place the executable binary wherever you would like to run it from. Additionally, consider adding its directory to your PATH variable if you would like the ghz command to be available everywhere.

Homebrew

brew install ghz

Compile

Clone

git clone https://github.com/bojand/ghz

Build using make

make build

Build using go

cd cmd/ghz
go build .

Install using go >= 1.16

go install github.com/bojand/ghz/cmd/ghz@latest

Install using Docker

DOCKER_BUILDKIT=1 docker build --output=/usr/local/bin --target=ghz-binary-built https://github.com/bojand/ghz.git

Usage

usage: ghz [<flags>] [<host>]

Flags:
  -h, --help                     Show context-sensitive help (also try --help-long and --help-man).
      --config=                  Path to the JSON or TOML config file that specifies all the test run settings.
      --proto=                   The Protocol Buffer .proto file.
      --protoset=                The compiled protoset file. Alternative to proto. -proto takes precedence.
      --call=                    A fully-qualified method name in 'package.Service/method' or 'package.Service.Method' format.
  -i, --import-paths=            Comma separated list of proto import paths. The current working directory and the directory of the protocol buffer file are automatically added to the import list.
      --cacert=                  File containing trusted root certificates for verifying the server.
      --cert=                    File containing client certificate (public key), to present to the server. Must also provide -key option.
      --key=                     File containing client private key, to present to the server. Must also provide -cert option.
      --cname=                   Server name override when validating TLS certificate - useful for self signed certs.
      --skipTLS                  Skip TLS client verification of the server's certificate chain and host name.
      --insecure                 Use plaintext and insecure connection.
      --authority=               Value to be used as the :authority pseudo-header. Only works if -insecure is used.
      --async                    Make requests asynchronous as soon as possible. Does not wait for request to finish before sending next one.
  -r, --rps=0                    Requests per second (RPS) rate limit for constant load schedule. Default is no rate limit.
      --load-schedule="const"    Specifies the load schedule. Options are const, step, or line. Default is const.
      --load-start=0             Specifies the RPS load start value for step or line schedules.
      --load-step=0              Specifies the load step value or slope value.
      --load-end=0               Specifies the load end value for step or line load schedules.
      --load-step-duration=0     Specifies the load step duration value for step load schedule.
      --load-max-duration=0      Specifies the max load duration value for step or line load schedule.
  -c, --concurrency=50           Number of request workers to run concurrently for const concurrency schedule. Default is 50.
      --concurrency-schedule="const"
                                 Concurrency change schedule. Options are const, step, or line. Default is const.
      --concurrency-start=0      Concurrency start value for step and line concurrency schedules.
      --concurrency-end=0        Concurrency end value for step and line concurrency schedules.
      --concurrency-step=1       Concurrency step / slope value for step and line concurrency schedules.
      --concurrency-step-duration=0
                                 Specifies the concurrency step duration value for step concurrency schedule.
      --concurrency-max-duration=0
                                 Specifies the max concurrency adjustment duration value for step or line concurrency schedule.
  -n, --total=200                Number of requests to run. Default is 200.
  -t, --timeout=20s              Timeout for each request. Default is 20s, use 0 for infinite.
  -z, --duration=0               Duration of application to send requests. When duration is reached, application stops and exits. If duration is specified, n is ignored. Examples: -z 10s -z 3m.
  -x, --max-duration=0           Maximum duration of application to send requests with n setting respected. If duration is reached before n requests are completed, application stops and exits. Examples: -x 10s -x 3m.
      --duration-stop="close"    Specifies how duration stop is reported. Options are close, wait or ignore. Default is close.
  -d, --data=                    The call data as stringified JSON. If the value is '@' then the request contents are read from stdin.
  -D, --data-file=               File path for call data JSON file. Examples: /home/user/file.json or ./file.json.
  -b, --binary                   The call data comes as serialized binary message or multiple count-prefixed messages read from stdin.
  -B, --binary-file=             File path for the call data as serialized binary message or multiple count-prefixed messages.
  -m, --metadata=                Request metadata as stringified JSON.
  -M, --metadata-file=           File path for call metadata JSON file. Examples: /home/user/metadata.json or ./metadata.json.
      --stream-interval=0        Interval for stream requests between message sends.
      --stream-call-duration=0   Duration after which client will close the stream in each streaming call.
      --stream-call-count=0      Count of messages sent, after which client will close the stream in each streaming call.
      --stream-dynamic-messages  In streaming calls, regenerate and apply call template data on every message send.
      --reflect-metadata=        Reflect metadata as stringified JSON used only for reflection request.
  -o, --output=                  Output path. If none provided stdout is used.
  -O, --format=                  Output format. One of: summary, csv, json, pretty, html, influx-summary, influx-details. Default is summary.
      --skipFirst=0              Skip the first X requests when doing the results tally.
      --count-errors             Count erroneous (non-OK) responses in stats calculations.
      --connections=1            Number of connections to use. Concurrency is distributed evenly among all the connections. Default is 1.
      --connect-timeout=10s      Connection timeout for the initial connection dial. Default is 10s.
      --keepalive=0              Keepalive time duration. Only used if present and above 0.
      --name=                    User specified name for the test.
      --tags=                    JSON representation of user-defined string tags.
      --cpus=12                  Number of cpu cores to use.
      --debug=                   The path to debug log file.
  -e, --enable-compression       Enable Gzip compression on requests.
  -v, --version                  Show application version.

Args:
  [<host>]  Host and port to test.

Go Package

report, err := runner.Run(
    "helloworld.Greeter.SayHello",
    "localhost:50051",
    runner.WithProtoFile("greeter.proto", []string{}),
    runner.WithDataFromFile("data.json"),
    runner.WithInsecure(true),
)

if err != nil {
    fmt.Println(err.Error())
    os.Exit(1)
}

printer := printer.ReportPrinter{
    Out:    os.Stdout,
    Report: report,
}

printer.Print("pretty")

Development

Go 1.11+ is required.

make # run all linters, tests, and produce code coverage
make build # build the binaries
make lint # run all linters
make test # run tests
make cover # run tests and produce code coverage

V=1 make # more verbosity
OPEN_COVERAGE=1 make cover # open code coverage.html after running

Credit

Icon made by Freepik from www.flaticon.com is licensed by CC 3.0 BY

License

Apache-2.0

ghz's People

Contributors

arinto, asaff1, bojand, bufdev, chenrui333, dependabot[bot], elmanelman, ezsilmar, fenollp, haunt98, jbub, keitaf, kenju, michaelperel, mnotti, mrnonz, nlohmann, pbabbicola, pgehin-leansys, raakasf, ricardo-kh, sprivitera, steverawlins-zebra, sujitdmello, tab1293, timowang1991, tp, tristanang, vipul-sharma20, zymoticb


ghz's Issues

Works on Mac but not on Windows

Proto file(s)

{
  "proto": "xxxxx/user.proto",
  "call": "user.User.GetVerificationCode",
  "n": 20,
  "c": 5,
  "d": {
    "mobile": "1"
  },
  "insecure": true,
  "host": "192.168.1.95:50051"
}

Command line arguments / config

xxx/ghz -config xxx/config.json

Expected Behavior

Summary:
Count: 20
Total: 79.92 ms
Slowest: 48.07 ms
Fastest: 8.89 ms
Average: 19.70 ms
Requests/sec: 250.26

Response time histogram:
8.886 [1] |∎∎∎
12.804 [12] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
16.722 [0] |
20.641 [0] |
24.559 [2] |∎∎∎∎∎∎∎
28.477 [0] |
32.396 [0] |
36.314 [2] |∎∎∎∎∎∎∎
40.232 [0] |
44.150 [0] |
48.069 [3] |∎∎∎∎∎∎∎∎∎∎

Latency distribution:
10% in 9.06 ms
25% in 9.72 ms
50% in 12.05 ms
75% in 35.45 ms
90% in 48.01 ms
95% in 48.07 ms
0% in 0 ns
Status code distribution:
[OK] 20 responses

Actual Behavior

Summary:
Count: 20
Total: 19.09 s
Slowest: 0 ns
Fastest: 0 ns
Average: 0 ns
Requests/sec: 0.00

Response time histogram:

Latency distribution:
Status code distribution:
[Unavailable] 20 responses

Error distribution:
[5] rpc error: code = Unavailable desc = transport is closing
[15] rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error:

Hi, I get the right result on Mac, but I can't run it on Windows. (version v0.31.0)

Add reflection support

Instead of specifying the proto file, we could use reflection to build the client for making the requests. The user would still have to provide the call details, which have to correctly match the reflection results. Reflection is only supported by a subset of languages.

How to send a request struct with bytes?

syntax = "proto3";
package protobuf;

message Request{
    string method_name = 1;
    bytes arg = 2;
}

message Response{
    bytes res = 1;
}

service RemoteCall{
  rpc get_result(Request) returns(Response){}
  rpc get_feature(Request) returns(Response){}
  rpc get_detectResult(Request) returns(Response){}
  rpc get_emotion(Request) returns(Response){}
}

The protobuf file is as above; how can I send bytes with your CLI tool?
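For reference, the proto3 JSON mapping represents bytes fields as standard base64 strings, so one approach is to base64-encode the raw bytes before building the -d payload. A sketch using the Request message above (requestJSON is a hypothetical helper, not part of ghz):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// requestJSON builds a -d payload for the Request message above.
// The proto3 JSON mapping represents `bytes` fields as standard
// base64 strings, so the raw bytes are encoded first.
func requestJSON(method string, arg []byte) (string, error) {
	payload := map[string]string{
		"method_name": method,
		"arg":         base64.StdEncoding.EncodeToString(arg),
	}
	b, err := json.Marshal(payload)
	return string(b), err
}

func main() {
	d, err := requestJSON("get_result", []byte{0x01, 0x02, 0xFF})
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // pass this string as the ghz -d '...' argument
}
```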

Add `-name` option

Some tools (e.g. vegeta) provide an option to name the run. This can be useful for organizing the results.

Add option:

-name string
    	Test name

And add it to the reporting.

Headers

It would be a good thing to be able to set headers.

InfluxDB line protocol is invalid

The ghz output documentation provides samples of influxdb-details output. However this is not valid line protocol. If I try to copy-paste one of the influxdb-details examples into a POST, I get the following error:

Request:
POST /write?db=ghz HTTP/1.1
Host: localhost:8086
Content-Type: application/x-www-form-urlencoded
ghz_detail,proto="/testdata/greeter.proto",call="helloworld.Greeter.SayHello",host="0.0.0.0:50051",n=1000,c=50,qps=0,z=0,timeout=20,dial_timeout=10,keepalive=0,data="{"name":"{{.InputName}}"}",metadata="{"rn":"{{.RequestNumber}}"}",hasError=false latency=5157328,error=,status=OK 681023506

Response:
{
"error": "unable to parse 'ghz_detail,proto="/testdata/greeter.proto",call="helloworld.Greeter.SayHello",host="0.0.0.0:50051",n=1000,c=50,qps=0,z=0,timeout=20,dial_timeout=10,keepalive=0,data="{\"name\":\"{{.InputName}}\"}",metadata="{\"rn\":\"{{.RequestNumber}}\"}",hasError=false latency=5157328,error=,status=OK 681023506': missing field value"
}

There are two problems with the request, specifically the last two field values:

  1. Line protocol does not allow sending nulls, so "error=" is an invalid input
  2. status=OK needs to be sent as a string, otherwise it will try to read "OK" as a boolean data type.
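Both fixes can be sketched in Go (fields is a hypothetical helper, not ghz's code): skip fields that have no value, and double-quote string field values so they are not parsed as another data type.

```go
package main

import (
	"fmt"
	"strings"
)

// fields renders the InfluxDB line-protocol field set: string
// values are double-quoted (with inner quotes escaped), numeric
// values are left bare, and empty values are dropped entirely,
// since line protocol has no null and `error=` is invalid.
func fields(kv map[string]string) string {
	var parts []string
	for _, k := range []string{"latency", "error", "status"} {
		v, ok := kv[k]
		if !ok || v == "" {
			continue // never emit a field with no value
		}
		if k == "latency" {
			parts = append(parts, k+"="+v) // numeric, unquoted
		} else {
			escaped := strings.ReplaceAll(v, `"`, `\"`)
			parts = append(parts, k+`="`+escaped+`"`) // string, quoted
		}
	}
	return strings.Join(parts, ",")
}

func main() {
	fmt.Println(fields(map[string]string{
		"latency": "5157328",
		"error":   "", // dropped, not emitted as `error=`
		"status":  "OK",
	}))
}
```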

Add a param for the settings / config file

Presently, all settings can be set via a grpcannon.json file if present in the same path as the grpcannon executable. It may be useful to have a flag argument for the settings file. Example:

grpcannon -config /path/to/config.json

Won't call to microservice built using go-micro framework

I have a service running locally, registered in consul (which I use as service registry):

    2018/10/08 16:31:29 [DEBUG] http: Request PUT /v1/agent/check/pass/service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549?note= (299.368µs) from=127.0.0.1:34708
    2018/10/08 16:31:38 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
    2018/10/08 16:31:38 [DEBUG] agent: Service "enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549" in sync
    2018/10/08 16:31:38 [DEBUG] agent: Check "service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549" in sync

I tried to run latest pre-built version of ghz downloaded from the download page and then ran:

$ ./ghz -proto="/home/comtom/Projects/src/github.com/TodayTix/ttproto-provider-interface/proto/ProviderService/ProviderService.proto" -call="enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549.GenerateShows" -d="{}" 127.0.0.1:34708 -insecure 

but failed with: **cannot find service "service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549"**

I also tried service:enta-nimax-b51023cb-cb2f-11e8-b79b-a08cfd74f549.GenerateShows as the service name, and just enta-nimax; the same happened. What am I doing wrong?

Add more detailed metrics ?

Add more metrics such as duration of different parts and sizes.

The gRPC stats package provides additional types for instrumenting detailed events such as ConnBegin, InHeader, InTrailer, etc., along with providing size data. It may be useful to collect this information in the results and report, but I am really not sure what info specifically would be useful.

transport: Error while dialing reading server HTTP response: unexpected EOF

Proto file(s)

syntax = "proto3";
package cnnsql;

service Prediction {
  rpc Predict(Request) returns (Result){}
}

message Request {
    string url = 1;
    string ip = 2;
}

message Result {
    int32 type = 1;
}

Command line arguments / config

cnnsql.json:

{
    "proto": "cnnsql.proto",
    "call": "cnnsql.Prediction.Predict",
    "d": {
        "url": "_%3D1498179095094%26list%3Dsh600030"
    },
    "insecure": true,
    "host": "127.0.0.1:8889"
}

Expected Behavior

{
	"type": 1
}

Actual Behavior

$ ./ghz -config cnnsql.json 

Summary:
  Count:        200
  Total:        11.94 ms
  Slowest:      0 ns
  Fastest:      0 ns
  Average:      0 ns
  Requests/sec: 0.00

Response time histogram:

Latency distribution:
Status code distribution:
  [Unavailable]   200 responses   

Error distribution:
  [200]   rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing reading server HTTP response: unexpected EOF" 

Steps to Reproduce (including precondition)

  1. Generate the Python gRPC code:
$ pip install grpcio grpcio-tools
$ python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. cnnsql.proto
$ ls
cnnsql_pb2.py       cnnsql_pb2_grpc.py
  2. Write the server code in server.py:
# coding: utf-8
from concurrent import futures
import time
import logging

import grpc

import cnnsql_pb2
import cnnsql_pb2_grpc

_ONE_DAY_IN_SECONDS = 60 * 60 * 24


class PredictionService(cnnsql_pb2_grpc.PredictionServicer):

    def Predict(self, request, context):
        response = cnnsql_pb2.Result(type=1)
        return response


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    cnnsql_pb2_grpc.add_PredictionServicer_to_server(PredictionService(), server)
    server.add_insecure_port('[::]:8889')
    server.start()
    try:
        print("Server started.")
        while True:
            time.sleep(_ONE_DAY_IN_SECONDS)
    except KeyboardInterrupt:
        server.stop(0)


if __name__ == '__main__':
    logging.basicConfig()
    serve()
  3. Run the server:
$ python server.py
  4. Load test:
$ ./ghz -config cnnsql.json 

Your Environment

  • OS: 10.14.3 (18D109)
  • ghz version: 0.31.0
  • Python env version:
    • python 3.6.5
    • grpcio 1.19.0
    • grpcio-tools 1.19.0

Need help understanding weird behavior

We are load testing an application and getting some different results on changing the different parameters for ghz. Details below

./ghz -config test.json
test.json:
{
    "z": "5m",
    "c": 5,
    "q": 10,
    "protoset": "./some.protoset",
    "call": "someservice",
    "host": "<POD_IP>:50051",
    "D": "../request/request_data.json"
}

Results

Summary:
  Count:        14824
  Total:        300040.46 ms
  Slowest:      2968.93 ms
  Fastest:      9.43 ms
  Average:      38.53 ms
  Requests/sec: 49.41

Response time histogram:
  9.430 [1]     |
  305.380 [14813]       |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  601.330 [5]   |
  897.279 [0]   |
  1193.229 [0]  |
  1489.179 [0]  |
  1785.129 [0]  |
  2081.079 [0]  |
  2377.029 [0]  |
  2672.979 [0]  |
  2968.929 [5]  |

Latency distribution:
  10% in 33.73 ms
  25% in 34.86 ms
  50% in 35.92 ms
  75% in 36.94 ms
  90% in 38.23 ms
  95% in 40.98 ms
  99% in 80.23 ms
Status code distribution:
  [OK]  14824 responses

A configuration with -n 1000 instead of -z "5m" yielded:

Summary:
  Count:        1000
  Total:        41512.74 ms
  Slowest:      7395.77 ms
  Fastest:      10.36 ms
  Average:      151.54 ms
  Requests/sec: 24.09

Response time histogram:
  10.360 [1]    |
  748.902 [974] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  1487.443 [10] |
  2225.984 [5]  |
  2964.525 [0]  |
  3703.066 [0]  |
  4441.607 [0]  |
  5180.148 [5]  |
  5918.690 [0]  |
  6657.231 [0]  |
  7395.772 [5]  |

Latency distribution:
  10% in 26.56 ms
  25% in 27.45 ms
  50% in 28.55 ms
  75% in 42.80 ms
  90% in 296.37 ms
  95% in 402.27 ms
  99% in 5054.40 ms
Status code distribution:
  [OK]  1000 responses

Can you please help in explaining the different behaviors? Is our understanding wrong?
In my understanding, n represents the number of requests, z represents the length of the test, and there is no other difference.

thanks

ghz command line vs ghz code output difference

Hi @bojand ,

I am using the ghz tool for single-channel performance testing. I am also looking into the GitHub code and running it from source.

I am getting different output when running from the binary and when running from the source code.
Binary File Running input
ghz -config config.json

Output
(screenshot of results omitted)

Source Code Running input
go run cmd/ghz/main.go cmd/ghz/config.go -config config.json

Output
(screenshot of results omitted)

As you can see, there is a big difference between the two results, so I want to know what is happening and which one is accurate.

Note: I am not changing your source code while running it

  • OS: MacOS 10.13.1
  • ghz version:0.30.0

Can't figure out how to use ghz-web

Heya. Having been unblocked by the fix to #55 (thanks again), I've tried to use the web frontend. I seem to be having issues with the binary -- could you have a look, please? 😃

Proto file(s)

NA

Command line arguments / config

web.toml is

protoset="gateway.protoset"
cert="/hab/svc/gateway/config/service.crt"
key="/hab/svc/gateway/config/service.key"
cacert="/hab/svc/gateway/config/root_ca.crt"
cname="gateway"
call="gateway.api.users.UsersMgmt/GetUsers"

[m]
"api-token"="bASZ1UdqkTjEqK3V-h1npK5tyfs="

host="10.0.2.15:2001"

CLI call is ./ghz-web -config web.toml

Expected Behavior

It starts.

Actual Behavior

# ./ghz-web -config web.toml
panic: Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub

goroutine 1 [running]:
main.main()
        /Users/bdjurkovic/dev/golang/ghz/cmd/ghz-web/main.go:60 +0x4c5
#

Steps to Reproduce (including precondition)

Get 0.26.0, run it with the config above.

Your Environment

  • OS: osx
  • ghz version: 0.26.0

Improve documentation

The ghz-web documentation could be improved to provide more details and instructions on the intended workflow and usage. Also, a walk-through would probably be useful.

Why do I always get "unknown format:" when I use a JSON config file

Proto file(s)
helloworld.proto

syntax = "proto3";

option java_multiple_files = true;
option java_package = "io.grpc.examples.helloworld";
option java_outer_classname = "HelloWorldProto";

package helloworld;

// The greeting service definition.
service Greeter {
    // Sends a greeting
    rpc SayHello (HelloRequest) returns (HelloReply) {
    }
}

// The request message containing the user's name.
message HelloRequest {
    string name = 1;
}

// The response message containing the greetings
message HelloReply {
    string message = 1;
}

Command line arguments / config
config.json is in the same directory as helloworld.proto; its content:

{
  "insecure": true,
  "proto": "helloworld.proto",
  "call": "helloworld.Greeter.SayHello",
  "total": 200,
  "concurrency": 10,
  "data": {
    "name": "Joe"
  },
  "host": "localhost:8099"
}

Describe the bug
When I use the command ghz --config=config.json, I get unknown format:
When I use the command ghz -config ./config.json, I get ghz: error: strconv.ParseUint: parsing "onfig": invalid syntax, try --help

I can successfully run the RPC test with the pure command line ghz --insecure --proto ./helloworld.proto --call helloworld.Greeter.SayHello -d '{"name":"Joe"}' localhost:8099
How can I change my config so that I can use the JSON config file?

Environment

  • OS: macOS 10.14.1
  • ghz: 0.35.0

Additional context
My grpc service is started in localhost with port 8099

Bidirectional streams can block

First of all, thanks for a great load testing tool for working with GRPC.

I am trying to test a server that handles backpressure, which involves using the HTTP2 flow control.
For that purpose I prepared the config for ghz that would cause the flow window to fill (grpc-java sets the window to 1MB). Even though the tests should pass, ghz blocks and finishes with

rpc error: code = DeadlineExceeded desc = context deadline exceeded

Command line arguments / config

ghz -config config.json

config.json contents:

{
  "proto": "greeter.proto",
  "call": "manualflowcontrol.StreamingGreeter.SayHelloStreaming",
  "n": 1,
  "c": 1,
  "t": 25,
  "host": "0.0.0.0:50051",
  "insecure": true,
  "d": [ {"name":"Joe"} # repeat 200000 times ]
}

Expected Behavior

Status code distribution: [OK] 1 responses

Actual Behavior

Error distribution: [1] rpc error: code = DeadlineExceeded desc = context deadline exceeded

Steps to Reproduce (including precondition)

Here's the server implementation which you can test against (remove the 100ms sleep in line 80 to make your testing easier):

https://github.com/grpc/grpc-java/blob/master/examples/src/main/java/io/grpc/examples/manualflowcontrol/ManualFlowControlServer.java

Your Environment

  • OS: MacOS
  • ghz version: 0.31.0

why the gRPC connect not closed in server side

When using this great tool to make 2K concurrent requests, I find that the socket connections are still kept open:

# lsof -p 102372 | grep -c "sock"
1394

After double-checking the code in this tool, it seems that close should be called.

Do you know if this is an optimized design in gRPC to keep a connection pool on demand, or is something else wrong?

Consider supporting json style field names

given a proto message

message SomeRequest{
    string some_field_name = 1;
}

the following would be an acceptable json payload when calling grpcannon

'{"someFieldName":"value"}'

Currently, the above JSON appears to return Unknown field name.
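For context, the proto3 JSON mapping derives a lowerCamelCase JSON name from each snake_case field name, which is why someFieldName is a reasonable payload to expect to work. A small sketch of that conversion:

```go
package main

import (
	"fmt"
	"strings"
)

// jsonName converts a proto field name (snake_case) to the
// lowerCamelCase JSON name defined by the proto3 JSON mapping,
// e.g. some_field_name -> someFieldName.
func jsonName(field string) string {
	parts := strings.Split(field, "_")
	for i := 1; i < len(parts); i++ {
		if parts[i] != "" {
			parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
		}
	}
	return strings.Join(parts, "")
}

func main() {
	fmt.Println(jsonName("some_field_name")) // someFieldName
}
```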

send binary data in stringified JSON

We need to send binary data in our stringified JSON request metadata.

For example:

./ghz -proto grpc-service.proto -call service.Put -d {"primarykey": "6C-getList:8bb1e1f6f-loadtest1k","value": "½ƒ]Ä0/™)ûÆ´Ÿù.å³P˼ñÁ»9¦W¡w¦Ë–s‹Ã]±ÔÖ‡,l`×Ñtz(´7,ÆŸ","ttl": 300} -c 1 -z 60s grpc.endpoint.service.com:8080

We also tried the -D flag with a path to a JSON file containing the binary data, but I think either the shell or ghz cannot interpret the binary data and therefore fails. Are there any suggestions on how we can send binary data using ghz?

We get errors like invalid character 'Ä' looking for beginning of value.

rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: first record does not look like a TLS handshake"

When I using this great tool to test my service I got a rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: first record does not look like a TLS handshake".

My service works well:
#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8ecd78ab381d test:my-test "/bin/sh -c 'python …" 32 minutes ago Up 32 minutes 0.0.0.0:50051->50051/tcp sad_goodall

And the command I used:
./ghz -proto ../api/grpc/test_service.proto -call mytest.TEST.evaluate -c 5 -n 15 -D ./input.json -o ./test_result.html -O html -name emacs-load-testing localhost:50051

ServerStreaming call metadata specification

Hi there,
While trying to call my server using ghz, I see on the wire that the initial setup does not have any metadata, and eventually tcp session gets reset (it works fine from ballerina client, but I would like to use the nice load testing and reporting facilities :)

Command in question (tried many -m variations without luck):

ghz -name "Testing" -c 1 -n 100 -insecure -proto ../server/target/grpc/HelloWorld.proto -d '{"req":"Sam"}' -m '{"IsServerStreaming":"true"}' -call service.HelloWorld/lotsOfReplies localhost:9095

.proto in question:
syntax = "proto3";
package service;
import "google/protobuf/wrappers.proto";
service HelloWorld {
rpc lotsOfReplies(google.protobuf.StringValue) returns (stream google.protobuf.StringValue);
}

Your thoughts on this would be highly appreciated!

David

not honoring -z option flag for duration

OS: Mac OS
grpcannon version: 0.4.1

When running grpcannon using the -z option flag, grpcannon is still defaulting to running 200 requests instead of honoring the time duration value.

example:

grpcannon -z 1m -cert cert.pem --proto employeedirectory.proto -call directory.position.GetEmployee -d '{"name":"Steve", "position":"doctor"}' -M metadata.json jobs.search.com:8080

Add metrics for receiving messages in stream calls ?

It may be useful to measure the amount of time between individual messages received in streaming calls. While probably relatively simple to collect, more design and detail is needed on how this would look in the reporting.

Improve file and stdin data to support streaming input ?

Currently, when data is provided using a file or stdin, we read the full data. We could improve this and support reading and parsing as a stream, probably using json.Decoder. However, this may cause some breakage or limitations with the more flexible data handling we support now. For example, presently if a single JSON object is passed in for a client streaming or bidi request, we use that for all writes to the client. This may not be possible with a streaming input; similarly, for client streaming calls it would have to be an array input. It is also not clear how that should work: we should probably send and record until the end of the stream, and then replay the payload as writes for all subsequent calls.

TLS: no way to provide client cert/key

Proto file(s)

None

Command line arguments / config

None

Expected Behavior

Given a gRPC setup where both the client and the server are required to provide a TLS cert, and given the root CA cert, as well as a cert/key pair for the client, I would like to be able to use ghz to benchmark the service's performance.

Actual Behavior

I can provide a root CA cert using -cert, and I can provide a server name override using -cname, but there's no way to set a client cert/key.

Steps to Reproduce (including precondition)

Have a server that requires the client to provide a TLS cert, and try to use ghz with it.

Sorry, this is brief; it could be fleshed out if need be. Please let me know if this is already supported in some way I haven't found.

Your Environment

  • OS: Darwin
  • ghz version: 0.24.0

How to run multiple gRPC requests in one test?

This is a general question more than a bug, but I'm trying to figure out whether the ghz framework is capable of running multiple gRPC requests in a flow-type end-to-end test. This would imply some kind of request/response chaining, which would make it more complex, but I'm wondering if this would be possible. Thanks!

arguments -o and -O not working

Command line arguments / config

./ghz -proto -call com.proto.test.pingPong -skipTLS -insecure -D <REQUEST_PATH> -c 10 -n 200 localhost:8080 -O "csv" -o <PATH_TO_CSV>

Expected Behavior

Create the file if not exist and save the output in that file in csv format

Actual Behavior

Nothing happens, shows output in stdout

Steps to Reproduce (including precondition)

Run the command

Your Environment

  • OS: Ubuntu 16.04
  • ghz version: 0.30.0

transport: authentication handshake failed: EOF

Hi,

I am unable to hit the GRPC service

Config:

{
    "proto": "C:/Users/disha.duggal/Documents/JMeterTests/GRPCTests/MM/my.proto",
    "call": "mypackage.myservice.Status",
    "n": 2000,
    "c": 50,
    "d": {
        "param1": "adhajdl",
        "param2":56750,
        "param3":"WEB"
    },
    "m": {
        "foo": "bar",
        "trace_id": "{{.RequestNumber}}",
        "timestamp": "{{.TimestampUnix}}"
    },
    "x": "10s",
    "host": "localhost:5001"
}
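The {{.RequestNumber}} and {{.TimestampUnix}} placeholders in the "m" metadata above are Go template actions that ghz expands per request. Conceptually the expansion works like this (the callData struct and expand helper here are stand-ins for illustration, not ghz's actual types):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// callData mimics the fields referenced by the metadata template;
// this struct is a stand-in for illustration, not ghz's actual type.
type callData struct {
	RequestNumber int64
	TimestampUnix int64
}

// expand renders the template once, the way such placeholders are
// conceptually filled in for each request.
func expand(tmpl string, data callData) (string, error) {
	t, err := template.New("m").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := expand(
		`{"trace_id": "{{.RequestNumber}}", "timestamp": "{{.TimestampUnix}}"}`,
		callData{RequestNumber: 1, TimestampUnix: 1560000000},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```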

Output:

PS C:\Users\disha.duggal\Documents\JMeterTests\GRPCTests\MM>  ghz -config .\MM_test.json

Summary:
  Count:        2000
  Total:        76.00 ms
  Slowest:      0.00 ms
  Fastest:      0.00 ms
  Average:      0.00 ms
  Requests/sec: 0.00

Response time histogram:

Latency distribution:
Status code distribution:
Error distribution:
  [2000]        rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: EOF"

rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error

I tried to follow your example, but I ran into the following issues.

[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#netstat -lanp|grep httpd
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      751/httpd           
unix  3      [ ]         STREAM     CONNECTED     18157    751/httpd            

[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#ghz -proto ./greeter.proto -call helloworld.Greeter.SayHello -d '{"name":"Joe"}' 0.0.0.0:80

Summary:
  Count:	200
  Total:	13.97 ms
  Slowest:	0.00 ms
  Fastest:	0.00 ms
  Average:	-9223372036854.78 ms
  Requests/sec:	14319.87

Response time histogram:

Latency distribution:
Status code distribution:
Error distribution:
  [200]	rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: oversized record received with length 20527"

[root@test /root/Workspace/go/src/github.com/ghz/testdata]
#ghz -insecure -proto ./greeter.proto -call helloworld.Greeter.SayHello -d '{"name":"Joe"}' 0.0.0.0:80

Summary:
  Count:	200
  Total:	18.78 ms
  Slowest:	0.00 ms
  Fastest:	0.00 ms
  Average:	-9223372036854.78 ms
  Requests/sec:	10649.79

Response time histogram:

Latency distribution:
Status code distribution:
Error distribution:
  [150]	rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: <nil>
  [50]	rpc error: code = Unavailable desc = transport is closing

How can I solve this issue? Thanks in advance!

Using different data for parallel calls

Command line arguments / config

./ghz -insecure -proto document.proto \
  -call DocService.CreateDoc \
  -n 2 \
  -c 2 \
  -D ../SummaryDocs.json \
  0.0.0.0:3000

Expected Behavior

2 independent calls should go to the server, with 2 different messages picked from the SummaryDocs.json file

Actual Behavior

2 calls are sent with the same data

I want to be able to send custom data for every call, instead of the same data being used for all 'n' calls. How can that be achieved?

  • OS: Mojave 10.14.3
  • ghz version: 0.32.0
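
Depending on the ghz version, the data file may be allowed to contain a JSON array of messages, which are then distributed across requests rather than repeating one payload. A sketch of what SummaryDocs.json might look like under that assumption (hypothetical field names):

```json
[
  { "title": "doc one", "body": "first payload" },
  { "title": "doc two", "body": "second payload" }
]
```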

received the unexpected content-type "text/plain"

Hello,

I am using go-micro to develop my microservice, and I use Consul as the service registry (it listens on 8500).

I tried:

ghz -proto ./hello.proto -insecure -call hello.Hello.Hi -d '{"name": "Joe"}' 127.0.0.1:8500

it said:

Latency distribution:
Status code distribution:
Error distribution:
  [143]	rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: <nil>
  [57]	rpc error: code = Unavailable desc = transport is closing

If I change 8500 to the port on which the service listens, I get:

Latency distribution:
Status code distribution:
Error distribution:
  [200]	rpc error: code = Internal desc = transport: received the unexpected content-type "text/plain"

Here are two questions:

1. Why does the error "received the unexpected content-type "text/plain"" happen?
2. How can I make it work with Consul?

Add basic templating to input

Hi! This tool is super useful - thanks for putting it together!

It would be very handy, for the particular use case that I have, if it were possible to use standard Go templating to swap in variables related to the state of the current run. In particular, what I was hoping for is a unique numeric identifier that can be templated in for each individual request.

Not sure if it's feasible or not, but it'd be nifty!
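
Assuming Go text/template-style actions over per-request state (the `{{.RequestNumber}}` and `{{.TimestampUnix}}` placeholders seen in configs elsewhere on this page), the templated data might look like:

```json
{
  "name": "user-{{.RequestNumber}}",
  "created_at": "{{.TimestampUnix}}"
}
```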

Add thresholds

It would be useful to have threshold settings within project options for different statistical metrics (e.g. fastest, slowest, average, percentiles) so we can report which ones fail the threshold. Additionally we could have a "key metric" setting that would dictate whether a test run / report fails based on the threshold setting for that metric (in addition to errors). So if the key metric fails its threshold, even with no errors, the test run / report would be considered a failure.

This would involve changes to database and schemas.

We could graph thresholds along with the metrics. For example, the change-over-time charts could show the thresholds (or at least the "key metric" threshold). Additionally we could mark them in the histogram and perhaps the comparison charts.

Add config for host to bind to?

Currently the config for the ghz-web app does not allow binding to a specific hostname; we automatically bind to localhost. Perhaps it would be useful to allow specifying the host to bind to. This adds a bit of complication to the frontend app, as we would need to communicate that setting (and probably the whole config) to the frontend app, which we currently do not.

Failed to load imports for "sr.proto". Syntax error: unexpected $unk

I have config_test.json:

{
	"proto": "C:/Users/1/Desktop/GHZ/protorepo/sr.proto",
	"call": "grpc.refe.SR.GEInfo",
	"host": "localhost:30058",
	"c": 2,
	"n": 4,
	"x": "1s",
	"o": "C:/Users/1/Desktop/GHZ/output",
	"O": "html",
	"insecure": true,
	"i": [
		"C:/Users/1/Desktop/GHZ/protorepo/grpc-proto/src/"
	]
}

my sr.proto looks like:

syntax = "proto3";

package grpc.refe;

import "proto/common/e.proto";
import "proto/common/si.proto";
import "google/protobuf/empty.proto";

option java_multiple_files = true;
option objc_class_prefix = "ABC";

message EResponse {
    repeated proto.common.E         e = 1;
}

service SR {
    rpc GEInfo(google.protobuf.Empty) returns (EResponse) {}
}

my e.proto looks like

syntax = "proto3";

package proto.common;

option java_multiple_files = true;
option objc_class_prefix = "ABC";

message E {
    int32 id = 1;
    string name = 2;
}

my si.proto looks like:

syntax = "proto3";

package proto.common;

option java_multiple_files = true;
option objc_class_prefix = "ABC";

I start ghz with ghz -config .\config_test.json
and get the error:
failed to load imports for "sr.proto": proto/common/si.proto:1:1: syntax error: unexpected $unk

All the files for import are located at: C:\Users\1\Desktop\GHZ\protorepo\grpc-proto\src\proto\common

I can't understand what this error means or how to fix it.
For this method I need only e.proto and empty.proto,
but I have other methods that need si.proto.
For now I don't know what to do and need help.

Using ghz v0.22.0 on Windows

Use more descriptive flags for ghz CLI ?

While the current flags are compact and succinct, it may be worthwhile to change (some) flags to a longer, more descriptive format to improve UX. Potentially we could keep the short format as an optional, quicker alternative.

For example some ideas for potential changes:

-c -> -concurrency
-n -> -requests? keep the same?
-q -> -qps (or -rate?)
-t -> -timeout
-z -> -duration
-x -> -max-duration
-d -> -data
-D -> -data-path
-b -> -binary-data
-B -> -binary-data-path
-m -> -metadata
-M -> -metadata-path
-si -> -stream-interval
-rmd -> -reflect-metadata
-o -> -out
-O -> -format
-i -> -import-paths
-T -> -dial-timeout
-L -> -keepalive
-v -> -version
-h -> -help

It seems most benchmarking tools opt for short options, but from research a handful offer descriptive flags as well. Some ideas or inspiration: autocannon and vegeta.

This would likely be a breaking change for the config file format.

To support both short and long formats we may switch from the standard flag module to something more robust, like kingpin perhaps.

@peter-edge Feel free to share any thoughts.
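
For what it's worth, even the standard flag package can bind a short and a long name to the same variable, so both spellings work without a new dependency; a minimal sketch with hypothetical flag names:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// newFlags registers a short and a long alias for the same setting on a
// fresh FlagSet, so "-c 10" and "--concurrency 10" are equivalent.
func newFlags() (*flag.FlagSet, *int) {
	fs := flag.NewFlagSet("ghz", flag.ContinueOnError)
	concurrency := fs.Int("c", 50, "number of concurrent requests (short)")
	fs.IntVar(concurrency, "concurrency", 50, "number of concurrent requests (long)")
	return fs, concurrency
}

func main() {
	fs, concurrency := newFlags()
	if err := fs.Parse(os.Args[1:]); err != nil {
		os.Exit(2)
	}
	fmt.Println("concurrency:", *concurrency)
}
```

The main downside is that both names show up as separate entries in the usage text, which is where a library like kingpin would help.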

requests per second

1000 requests took 1246.66 ms, so Requests/sec should be 802.14294 rather than 802142.94.

  Count:	1000
  Total:	1246.66 ms
  Slowest:	458.03 ms
  Fastest:	1.25 ms
  Average:	52.86 ms
  Requests/sec:	802142.94

Response time histogram:
  1.248 [1]	|
  46.926 [717]	|∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
  92.604 [82]	|∎∎∎∎∎
  138.282 [54]	|∎∎∎
  183.960 [46]	|∎∎∎
  229.638 [38]	|∎∎
  275.316 [29]	|∎∎
  320.994 [21]	|∎
  366.672 [7]	|
  412.350 [3]	|
  458.028 [2]	|

Latency distribution:
  10% in 2.53 ms
  25% in 4.75 ms
  50% in 12.71 ms
  75% in 60.01 ms
  90% in 185.27 ms
  95% in 244.34 ms
  99% in 325.27 ms
Status code distribution:
  [OK]	1000 responses

invalid character '\'' looking for beginning of value

When I try to run ghz on my proto file, I get the above-mentioned error.

ghz -insecure -proto service.proto -call adapter.ScoreService.GetScore -d '{"body":"test", "fields": {"key1":"test1", "key2":"test2"}}' localhost:5300

The gRPC server is running on my local machine. My sample protobuf file is below:

syntax = "proto3";

package adapter;


service ScoreService {
    rpc GetScore(ScoreRequest) returns (ScoreResponse) {}
}

message ScoreRequest {
    string body = 1;
    map<string, string> fields = 2;
}

message ScoreResponse {
    int32 score = 1;
}

Any help identifying the missing piece would be appreciated. Thanks.
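
One common cause of this error: on Windows shells (cmd, and older PowerShell in some cases), single quotes are not stripped the way POSIX shells strip them, so the literal `'` reaches the JSON parser as the first character of the value. Passing the payload via a file with `-D` avoids shell quoting entirely; a sketch assuming a hypothetical data.json next to the proto:

```json
{
  "body": "test",
  "fields": { "key1": "test1", "key2": "test2" }
}
```

Then run the same command with `-D data.json` in place of `-d '...'`.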
