
rpc-perf's Introduction

rpc-perf

rpc-perf is a tool for measuring the performance of network services. While it has historically focused on caching services, the support is expanding to cover HTTP and PubSub.

Licensed under Apache-2.0 or MIT. Build status: CI.


Getting started

Follow the build instructions to build rpc-perf from this repository and take a look at the example configurations in the configs folder. There you will find some examples for each of the supported protocols. The examples provide a starting point and may need some changes to produce a representative workload for your testing.

Building from source

To build rpc-perf from source, you will need a current Rust toolchain. If you don't have one, you can use rustup or follow the instructions on rust-lang.org.

Now that you have a Rust toolchain installed, you can clone and build the project using the cargo command.

git clone https://github.com/iopsystems/rpc-perf
cd rpc-perf
cargo build --release

This will produce a binary at target/release/rpc-perf, which you may copy to a more convenient location. See the getting started section above for more information on how to use rpc-perf.

Contributing

If you want to submit a patch, please follow these steps:

  1. create a new issue
  2. fork on github & clone your fork
  3. create a feature branch on your fork
  4. push your feature branch
  5. create a pull request linked to the issue


rpc-perf's Issues

add a connection pool abstraction

Follow-up for after #9 is merged to refactor connection/session
handling logic to move towards a shared connection pool model.

This will help reduce some of the duplicated logic within each
client and allow us to ensure unified behaviors.
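The shared pool described above might look like the following minimal sketch; the type and method names are hypothetical, and a real pool would wrap actual sessions rather than a placeholder struct:

```rust
use std::collections::VecDeque;

// Hypothetical connection type; real clients would wrap a TcpStream or session.
struct Connection {
    id: usize,
}

// A minimal shared pool: clients check connections out and return them,
// so reconnect and lifecycle logic can live in one place instead of
// being duplicated in each client.
struct ConnectionPool {
    idle: VecDeque<Connection>,
    next_id: usize,
    capacity: usize,
}

impl ConnectionPool {
    fn new(capacity: usize) -> Self {
        ConnectionPool { idle: VecDeque::new(), next_id: 0, capacity }
    }

    // Reuse an idle connection if one exists, otherwise open a new one.
    fn checkout(&mut self) -> Connection {
        self.idle.pop_front().unwrap_or_else(|| {
            let c = Connection { id: self.next_id };
            self.next_id += 1;
            c
        })
    }

    // Return a connection; drop it if the pool is already at capacity.
    fn checkin(&mut self, conn: Connection) {
        if self.idle.len() < self.capacity {
            self.idle.push_back(conn);
        }
    }
}

fn main() {
    let mut pool = ConnectionPool::new(2);
    let a = pool.checkout();
    pool.checkin(a);
    // The returned connection is reused before a new one is opened.
    assert_eq!(pool.checkout().id, 0);
    assert_eq!(pool.checkout().id, 1);
}
```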

Error building rdkafka

rdkafka fails to build due to a mismatched types error. The twitter rpc-perf repository builds successfully, but this fork does not.

Expected behavior

Successful build.

Actual behavior

error[E0308]: mismatched types
--> /home/amartin/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rdkafka-0.25.0/src/config.rs:153:17
|
150 | rdsys::rd_kafka_conf_get(
| ------------------------ arguments to this function are incorrect
...
153 | buf.as_mut_ptr() as *mut i8,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected *mut u8, found *mut i8
|
= note: expected raw pointer *mut u8
found raw pointer *mut i8
note: function defined here
--> /home/amartin/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rdkafka-sys-3.0.0+1.6.0/src/bindings.rs:922:12
|
922 | pub fn rd_kafka_conf_get(
| ^^^^^^^^^^^^^^^^^

For more information about this error, try rustc --explain E0308.
error: could not compile rdkafka (lib) due to previous error
warning: build failed, waiting for other jobs to finish...

Steps to reproduce the behavior

Error occurs when building on my system.
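This looks like the known architecture-dependent `c_char` mismatch: on some targets (e.g. aarch64 Linux) C's `char` maps to `u8` rather than `i8`, so a hard-coded `*mut i8` cast fails against the bindgen-generated signature. Upgrading to a newer rdkafka crate release may resolve it; the portable form of the cast, sketched here, goes through the platform alias instead of a concrete integer type:

```rust
use std::os::raw::c_char;

// Casting through `c_char` compiles on both signed-char and
// unsigned-char targets, unlike a hard-coded `*mut i8`.
fn as_c_buf(buf: &mut [u8]) -> *mut c_char {
    buf.as_mut_ptr() as *mut c_char
}

fn main() {
    let mut buf = [0u8; 8];
    let p = as_c_buf(&mut buf);
    assert!(!p.is_null());
}
```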

use uri-schemes to encode endpoint specific details

As part of #9 we found that there will be client-specific configuration details,
such as a cache name, database name, or other aspects. Some endpoint types
may already have uri-schemes in use; for others, we can adopt unofficial
uri-schemes.

For example, the redis uri-scheme can be found here:
https://www.iana.org/assignments/uri-schemes/prov/redis
and can be used to specify the host, port, database name, authentication
details, etc.

By moving towards this type of approach, we should be able to centralize
any protocol specific configuration details.
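As an illustration of the information a redis URI can carry, here is a minimal std-only sketch of extracting host, port, and database number; a real implementation would more likely use a URL-parsing crate, and the function name is hypothetical:

```rust
// Parse "redis://[user:pass@]host[:port][/db]" into (host, port, db).
// Userinfo is skipped here for brevity; 6379 is the scheme's default port.
fn parse_redis_uri(uri: &str) -> Option<(String, u16, Option<u32>)> {
    let rest = uri.strip_prefix("redis://")?;
    let (authority, path) = match rest.split_once('/') {
        Some((a, p)) => (a, Some(p)),
        None => (rest, None),
    };
    // Drop authentication details ("user:pass@") if present.
    let hostport = authority.rsplit_once('@').map_or(authority, |(_, hp)| hp);
    let (host, port) = match hostport.split_once(':') {
        Some((h, p)) => (h.to_string(), p.parse().ok()?),
        None => (hostport.to_string(), 6379),
    };
    let db = path.and_then(|p| p.parse().ok());
    Some((host, port, db))
}

fn main() {
    let (host, port, db) = parse_redis_uri("redis://cache.example.com:6380/2").unwrap();
    assert_eq!(host, "cache.example.com");
    assert_eq!(port, 6380);
    assert_eq!(db, Some(2));
}
```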

refactor buffer size configuration

Buffer size configuration is duplicated in cache clients and pubsub configurations. This could be refactored into shared configuration.
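One shape the shared configuration could take is a single struct embedded by both the cache-client and pubsub configs; the field names and defaults below are hypothetical:

```rust
// Shared buffer settings, hoisted out of the per-protocol configs.
#[derive(Clone, Copy, Debug, PartialEq)]
struct BufferConfig {
    read_buffer_size: usize,
    write_buffer_size: usize,
}

impl Default for BufferConfig {
    fn default() -> Self {
        BufferConfig { read_buffer_size: 16 * 1024, write_buffer_size: 16 * 1024 }
    }
}

// Both protocol configs embed the same struct instead of duplicating fields.
struct CacheClientConfig {
    buffers: BufferConfig,
    // ... cache-specific settings
}

struct PubsubConfig {
    buffers: BufferConfig,
    // ... pubsub-specific settings
}

fn main() {
    let cache = CacheClientConfig { buffers: BufferConfig::default() };
    let pubsub = PubsubConfig { buffers: BufferConfig::default() };
    assert_eq!(cache.buffers, pubsub.buffers);
}
```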

Add a metric to rate limiter for “skipped token” due to token bucket being full

When the server falls behind the rate limit, requests are not enqueued in a timely manner. Recording the number of skipped requests with a metric (a counter?) will provide a quick view of how many requests would have been sent had the server kept up. This metric can also be used to implement a smart stop criterion (e.g. if X% of requests were skipped over time period Y, stop the experiment).
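A minimal sketch of the counter, assuming a token-bucket limiter where refill overflow corresponds to requests that could not be sent in time (names are hypothetical):

```rust
// Token bucket that counts tokens discarded because the bucket was
// already full; each discarded token is a request that would have been
// sent had the consumer kept up with the configured rate.
struct TokenBucket {
    tokens: u64,
    capacity: u64,
    skipped: u64, // counter metric: tokens dropped on refill
}

impl TokenBucket {
    fn new(capacity: u64) -> Self {
        TokenBucket { tokens: capacity, capacity, skipped: 0 }
    }

    // Called by the refill timer; overflow is recorded, not dropped silently.
    fn refill(&mut self, n: u64) {
        let free = self.capacity - self.tokens;
        if n > free {
            self.skipped += n - free;
        }
        self.tokens = (self.tokens + n).min(self.capacity);
    }

    fn try_take(&mut self) -> bool {
        if self.tokens > 0 { self.tokens -= 1; true } else { false }
    }
}

fn main() {
    let mut b = TokenBucket::new(10);
    b.refill(5); // bucket already full: all 5 tokens counted as skipped
    assert_eq!(b.skipped, 5);
    assert!(b.try_take());
    b.refill(3); // one slot free: 2 of 3 counted as skipped
    assert_eq!(b.skipped, 7);
}
```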

refactor output/exposition

Currently, we repeat a lot of the basic exposition logic between rpc-perf,
Pelikan, and Rezolus. We should move exposition logic into rustcommon
and allow it to be shared across these projects. This likely involves
changes to allow scoped metrics and metric metadata.

Independently, the CLI output of rpc-perf should be refactored to use
structs and Display impls to help centralize the format strings and allow
clearer understanding of what the output will look like.

Additionally, the metric snapshotting to calculate rates can be refactored
to move the timestamp into the snapshot and allow delta/rate calculations
between complete snapshots. This should reduce some of the verbosity
and duplication for various output formats (json and cli).
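The Display-based refactor for the CLI output might look like this sketch; the struct fields and format string are hypothetical, but the point is that the format lives in one place:

```rust
use std::fmt;

// One interval's stats; the Display impl centralizes the CLI format
// string instead of scattering it across output call sites.
struct RequestStats {
    rate: f64,
    success_pct: f64,
}

impl fmt::Display for RequestStats {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "rate: {:.2}/s success: {:.1}%", self.rate, self.success_pct)
    }
}

fn main() {
    let s = RequestStats { rate: 1234.5, success_pct: 99.9 };
    assert_eq!(s.to_string(), "rate: 1234.50/s success: 99.9%");
}
```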

SLO enforcement mechanisms

#9 initially included a basic implementation of SLO enforcement. It
was removed from that PR because we want a more general approach
to be able to enforce SLO as defined by a variety of criteria or some
combination of criteria. For example, we may want to have stop
conditions based on latency SLOs at various percentiles, or on error
rates, or timeouts, or ...

This issue is to track the future development of that functionality.
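One general shape for this, sketched with hypothetical names: each stop criterion implements a common trait, and criteria compose with combinators (any/all/N-of-M):

```rust
// Per-interval stats fed to the stop conditions; fields are illustrative.
struct IntervalStats {
    p999_latency_us: u64,
    error_rate: f64,
}

trait StopCondition {
    fn should_stop(&self, stats: &IntervalStats) -> bool;
}

struct LatencySlo { percentile_limit_us: u64 }
impl StopCondition for LatencySlo {
    fn should_stop(&self, stats: &IntervalStats) -> bool {
        stats.p999_latency_us > self.percentile_limit_us
    }
}

struct ErrorRateSlo { max_error_rate: f64 }
impl StopCondition for ErrorRateSlo {
    fn should_stop(&self, stats: &IntervalStats) -> bool {
        stats.error_rate > self.max_error_rate
    }
}

// Stop if any condition fires; all/N-of-M combinators follow the same shape.
fn any_violated(conds: &[Box<dyn StopCondition>], stats: &IntervalStats) -> bool {
    conds.iter().any(|c| c.should_stop(stats))
}

fn main() {
    let conds: Vec<Box<dyn StopCondition>> = vec![
        Box::new(LatencySlo { percentile_limit_us: 5_000 }),
        Box::new(ErrorRateSlo { max_error_rate: 0.01 }),
    ];
    let ok = IntervalStats { p999_latency_us: 900, error_rate: 0.001 };
    let bad = IntervalStats { p999_latency_us: 9_000, error_rate: 0.001 };
    assert!(!any_violated(&conds, &ok));
    assert!(any_violated(&conds, &bad));
}
```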

kafka config comments incorrect

Some of the comments in the Kafka config example incorrectly reference Momento. Please review and correct the config comments.

consider refactoring towards traits/supertraits for workload

Currently we're using enum dispatch, and match statements become quite
unwieldy as protocol complexity goes up.

It's particularly bad in the workload generation logic, which needs
to be able to generate any supported request.

We may be able to leverage traits and/or dynamic dispatch to help
break up some of the larger blocks and group functionality, e.g.
plain key-value and hash operations could be handled in separate
groups.
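A sketch of that grouping, with hypothetical trait and type names: each operation family implements one generator trait, and the workload holds trait objects instead of matching over a giant enum:

```rust
// Each operation family implements the same generator trait, so the
// workload loop no longer needs one match arm per request variant.
trait RequestGenerator {
    fn generate(&self, key: &str) -> String;
}

struct KeyValueOps;
impl RequestGenerator for KeyValueOps {
    fn generate(&self, key: &str) -> String {
        format!("GET {}", key)
    }
}

struct HashOps;
impl RequestGenerator for HashOps {
    fn generate(&self, key: &str) -> String {
        format!("HGETALL {}", key)
    }
}

fn main() {
    let generators: Vec<Box<dyn RequestGenerator>> = vec![
        Box::new(KeyValueOps),
        Box::new(HashOps),
    ];
    let reqs: Vec<String> = generators.iter().map(|g| g.generate("k1")).collect();
    assert_eq!(reqs, vec!["GET k1", "HGETALL k1"]);
}
```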

do we need a different strategy for compressible payloads?

Currently, we take a naive approach of a payload (either a value for cache or a message for pubsub) having a fixed number of random bytes to achieve a target compression ratio. These bytes are grouped at the head of the payload (ignoring the additional header we pack into pubsub messages to track timing and integrity).

We should do some analysis to determine if the entropy needs to be spread throughout the payload, either by shuffling the bytes, or using a reduced set of symbols, or ...

It would be interesting to know whether this even affects the compression/decompression overheads in an appreciable way for expected value/message sizes.
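The two layouts under discussion can be sketched as follows; this uses a tiny xorshift PRNG purely for illustration, where real code would use the workload's own RNG:

```rust
// Minimal xorshift64 PRNG; illustrative only, not the workload's RNG.
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

// Current approach: `ratio` of the bytes are random, grouped at the head.
fn payload_head(len: usize, ratio: f64, rng: &mut XorShift) -> Vec<u8> {
    let random = (len as f64 * ratio) as usize;
    let mut buf = vec![0u8; len];
    for b in buf.iter_mut().take(random) {
        *b = rng.next() as u8;
    }
    buf
}

// Alternative: same byte population, but shuffled so the entropy is
// spread through the payload (Fisher-Yates).
fn payload_spread(len: usize, ratio: f64, rng: &mut XorShift) -> Vec<u8> {
    let mut buf = payload_head(len, ratio, rng);
    for i in (1..buf.len()).rev() {
        let j = (rng.next() % (i as u64 + 1)) as usize;
        buf.swap(i, j);
    }
    buf
}

fn main() {
    let mut rng = XorShift(0x1234_5678_9abc_def0);
    let head = payload_head(64, 0.25, &mut rng);
    // Head layout: everything past the first 16 bytes is zero.
    assert_eq!(head.iter().skip(16).filter(|&&b| b != 0).count(), 0);
    let spread = payload_spread(64, 0.25, &mut rng);
    assert_eq!(spread.len(), 64);
}
```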
