mnogu / gatling-kafka

A Gatling stress test plugin for Apache Kafka protocol

License: Apache License 2.0

Language: Scala 100.00%
Topics: gatling, kafka, stress-test

gatling-kafka's People

Contributors: jtjeferreira, mnogu, noelip, osleonard, rroyoo

gatling-kafka's Issues

localhost used despite remote host being set in BOOTSTRAP_SERVERS_CONFIG

Hi,
I am trying to use the provided examples. They work perfectly on localhost, but do not work with a remote Kafka.

Even though I am setting

ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> "node-perf-01.ops.com:9092", // list of Kafka broker hostname and port pairs

Gatling attempts to connect to localhost and then fails with an exception:

2018-04-11 14:35:43,977 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.Metadata.update(Metadata.java:241) - Updated cluster metadata version 2 to Cluster(id = hfosHSPVSyi-Cyz1JfSsHQ, nodes = [localhost:9092 (id: 1 rack: null)], partitions = [Partition(topic = events, partition = 0, leader = 1, replicas = [1,], isr = [1,])])
2018-04-11 14:35:43,986 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.initiateConnect(NetworkClient.java:496) - Initiating connection to node 1 at localhost:9092.

QUESTION: Am I missing some config elsewhere?

NOTE: I have seen a similar issue here: zendesk/maxwell#360
But I am not sure whether it is relevant. It is also not clear to me how to configure advertised.listeners from Gatling; it does not appear to be a producer option...

Please advise.
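For context on the logs below: the producer reaches the bootstrap server fine, but the metadata response redirects it to localhost, which is typically a broker-side setting rather than anything configurable in Gatling. A minimal sketch of the relevant broker `server.properties` entries (the hostname is illustrative, reusing the one from this report):

```
# server.properties on the Kafka broker (hostname illustrative)
listeners=PLAINTEXT://0.0.0.0:9092
# The address the broker advertises back to clients in metadata responses;
# if this resolves to localhost, every client will be redirected there.
advertised.listeners=PLAINTEXT://node-perf-01.ops.com:9092
```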

package load

import java.util.Date

import com.github.mnogu.gatling.kafka.Predef._
import io.gatling.core.Predef._
import org.apache.kafka.clients.producer.ProducerConfig

class KafkaSimulation extends Simulation {

  val kafkaConf = kafka
    .topic("test_events") // Kafka topic name
    .properties( // Kafka producer configs
    Map(
      ProducerConfig.ACKS_CONFIG -> "1",

      ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> "node-perf-01.ops.com:9092", // list of Kafka broker hostname and port pairs

      ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG ->
        "org.apache.kafka.common.serialization.StringSerializer", // in most cases, StringSerializer or ByteArraySerializer

      ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG ->
        "org.apache.kafka.common.serialization.StringSerializer"
    )
  )

  val scn = scenario("Kafka Test")
    .exec(
      kafka("request")
        // message to send
        .send[String]("foo" + (new Date()).toString))


  val loadProfile = scn.inject(
    rampUsers(10) over (5)
  )

  setUp(loadProfile)
    .protocols(kafkaConf)
    .maxDuration(20)


}
	acks = 1
	batch.size = 16384
	block.on.buffer.full = false
	bootstrap.servers = [node-perf-01.ops.com:9092]
	buffer.memory = 33554432
	client.id = 
	compression.type = none
	connections.max.idle.ms = 540000
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.fetch.timeout.ms = 60000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	timeout.ms = 30000
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2018-04-11 14:35:43,488 [INFO] [pool-1-thread-1] o.a.k.c.p.ProducerConfig.logAll(AbstractConfig.java:180) - ProducerConfig values: 
	acks = 1
	batch.size = 16384
	block.on.buffer.full = false
	bootstrap.servers = [node-perf-01.ops.com:9092]
	buffer.memory = 33554432
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	interceptor.classes = null
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.fetch.timeout.ms = 60000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 0
	retry.backoff.ms = 100
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	timeout.ms = 30000
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

2018-04-11 14:35:43,493 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name bufferpool-wait-time
2018-04-11 14:35:43,495 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name buffer-exhausted-records
2018-04-11 14:35:43,496 [DEBUG] [pool-1-thread-1] o.a.k.c.Metadata.update(Metadata.java:241) - Updated cluster metadata version 1 to Cluster(id = null, nodes = [node-perf-01.ops.com:9092 (id: -1 rack: null)], partitions = [])
2018-04-11 14:35:43,502 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name connections-closed:
2018-04-11 14:35:43,502 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name connections-created:
2018-04-11 14:35:43,503 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name bytes-sent-received:
2018-04-11 14:35:43,503 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name bytes-sent:
2018-04-11 14:35:43,504 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name bytes-received:
2018-04-11 14:35:43,504 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name select-time:
2018-04-11 14:35:43,504 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name io-time:
2018-04-11 14:35:43,511 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name batch-size
2018-04-11 14:35:43,511 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name compression-rate
2018-04-11 14:35:43,511 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name queue-time
2018-04-11 14:35:43,512 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name request-time
2018-04-11 14:35:43,512 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name produce-throttle-time
2018-04-11 14:35:43,512 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name records-per-request
2018-04-11 14:35:43,513 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name record-retries
2018-04-11 14:35:43,513 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name errors
2018-04-11 14:35:43,513 [DEBUG] [pool-1-thread-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name record-size-max
2018-04-11 14:35:43,514 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.p.i.Sender.run(Sender.java:130) - Starting Kafka producer I/O thread.
2018-04-11 14:35:43,515 [INFO] [pool-1-thread-1] o.a.k.c.u.AppInfoParser.<init>(AppInfoParser.java:83) - Kafka version : 0.10.1.1
2018-04-11 14:35:43,516 [INFO] [pool-1-thread-1] o.a.k.c.u.AppInfoParser.<init>(AppInfoParser.java:84) - Kafka commitId : f10ef2720b03b247
2018-04-11 14:35:43,516 [DEBUG] [pool-1-thread-1] o.a.k.c.p.KafkaProducer.<init>(KafkaProducer.java:332) - Kafka producer started
Simulation load.JbtSimulation started...
2018-04-11 14:35:43,597 [DEBUG] [GatlingSystem-akka.actor.default-dispatcher-3] i.g.c.c.Controller.io$gatling$core$controller$Controller$$anonfun$3$$$anonfun$2(Controller.scala:59) - Setting up max duration
2018-04-11 14:35:43,617 [DEBUG] [GatlingSystem-akka.actor.default-dispatcher-4] i.g.c.c.i.Injector.startUser(Injector.scala:131) - Start user #1
2018-04-11 14:35:43,623 [DEBUG] [GatlingSystem-akka.actor.default-dispatcher-4] i.g.c.c.i.Injector.injectStreams(Injector.scala:123) - Injecting 1 users, continue=false
2018-04-11 14:35:43,624 [INFO] [GatlingSystem-akka.actor.default-dispatcher-3] i.g.c.c.Controller.applyOrElse(Controller.scala:77) - InjectionStopped expectedCount=1
2018-04-11 14:35:43,631 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.maybeUpdate(NetworkClient.java:644) - Initialize connection to node -1 for sending metadata request
2018-04-11 14:35:43,631 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.initiateConnect(NetworkClient.java:496) - Initiating connection to node -1 at node-perf-01.ops.com:9092.
2018-04-11 14:35:43,748 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node--1.bytes-sent
2018-04-11 14:35:43,748 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node--1.bytes-received
2018-04-11 14:35:43,748 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node--1.latency
2018-04-11 14:35:43,749 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.n.Selector.pollSelectionKeys(Selector.java:327) - Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2018-04-11 14:35:43,749 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.handleConnections(NetworkClient.java:476) - Completed connection to node -1
2018-04-11 14:35:43,852 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.maybeUpdate(NetworkClient.java:640) - Sending metadata request {topics=[events]} to node -1
2018-04-11 14:35:43,977 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.Metadata.update(Metadata.java:241) - Updated cluster metadata version 2 to Cluster(id = hfosHSPVSyi-Cyz1JfSsHQ, nodes = [localhost:9092 (id: 1 rack: null)], partitions = [Partition(topic = events, partition = 0, leader = 1, replicas = [1,], isr = [1,])])
2018-04-11 14:35:43,986 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.initiateConnect(NetworkClient.java:496) - Initiating connection to node 1 at localhost:9092.
2018-04-11 14:35:43,987 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node-1.bytes-sent
2018-04-11 14:35:43,987 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node-1.bytes-received
2018-04-11 14:35:43,988 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.m.Metrics.sensor(Metrics.java:296) - Added sensor with name node-1.latency
2018-04-11 14:35:43,988 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.n.Selector.pollSelectionKeys(Selector.java:365) - Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:51)
	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:73)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:291)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
	at java.lang.Thread.run(Thread.java:748)
2018-04-11 14:35:43,988 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.handleDisconnections(NetworkClient.java:463) - Node 1 disconnected.
2018-04-11 14:35:44,039 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.NetworkClient.initiateConnect(NetworkClient.java:496) - Initiating connection to node 1 at localhost:9092.
2018-04-11 14:35:44,039 [DEBUG] [kafka-producer-network-thread | producer-1] o.a.k.c.n.Selector.pollSelectionKeys(Selector.java:365) - Connection with localhost/127.0.0.1 disconnected

Please provide a Kafka consumer API

Hi,

Thanks for providing the producer API.
Have you implemented a Kafka consumer API for Gatling tests? If so, please share it.
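Until the plugin grows a consumer side, a bare kafka-clients consumer loop can serve as a stopgap for verifying messages outside Gatling. A minimal sketch, assuming the kafka-clients 0.10.x API the plugin already depends on (topic name and bootstrap address are illustrative):

```scala
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import scala.collection.JavaConverters._

object ConsumerSketch extends App {
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // illustrative
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "gatling-check")
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer")
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
    "org.apache.kafka.common.serialization.StringDeserializer")

  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(Collections.singletonList("test_events")) // illustrative topic
  try {
    // poll(long) is the pre-2.0 signature matching the 0.10.x client
    val records = consumer.poll(1000L)
    records.asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
  } finally consumer.close()
}
```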

Statistics Information about gatling-kafka load generation?

More of a discussion topic and request here.

I don't think this project currently mentions any statistics or information regarding Kafka load generation with Gatling.

Kafka is a relatively new technology, and as far as I recall, most mentions of load/performance testing come down to articles like these:

https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines

https://grey-boundary.io/load-testing-apache-kafka-on-aws/

which seem to be more of benchmarking or generic stressing of a kafka environment without much regard for custom messages and other requirements specific to an organization. Your tool helps satisfy the need for the customizations needed for load testing a kafka-based environment.

It would be nice to know of some stats regarding usage of this tool (as a point of reference) such as what amount of load has been generated by this plugin like:

  • number of kafka messages sent per second
  • kafka message sizes (in bytes) used in the load
  • number of producers/users generating the load
  • load generation schemes (ramp up/down patterns)
  • how much load gatling-kafka generated on a given setup (single-box hardware, distributed test setup) before Gatling/gatling-kafka ran out of resources, OR the generated load was good enough that you did not need to scale further
  • how much load generated against the system under test by gatling-kafka

Other users of gatling-kafka (if there are any so far) could also chime in on this topic. I will provide my input once I've used gatling-kafka for some load testing. For now I'm focusing on kafkameter (for JMeter) as a fallback, since I'm more familiar with it than with Gatling; I will move on to gatling-kafka afterwards if I need more "scale" for load generation.

Release latest version (0.1.2) to Maven Central

The current version on Maven Central is 0.1.0.

This version does not work with Gatling 2.2.4, but the latest version of the code (0.1.2) appears to work fine. Could you please update the package on Maven Central?

Run script from my project

Can I run a script that uses the Kafka plugin from my own project?
I get an error, but when I run it from the Gatling bundle location (C:\gatling-charts-highcharts-bundle-3.6.0\user-files\simulations) it passes successfully.

Any ideas?
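One way to run simulations from your own project rather than the bundle is Gatling's sbt plugin. A hedged sketch of the build wiring, assuming an sbt project; the plugin version and the gatling-kafka coordinates below are illustrative and should be checked against what your build actually resolves:

```scala
// project/plugins.sbt
addSbtPlugin("io.gatling" % "gatling-sbt" % "4.2.6") // version illustrative

// build.sbt
enablePlugins(GatlingPlugin)

libraryDependencies ++= Seq(
  "io.gatling.highcharts" % "gatling-charts-highcharts" % "3.6.0" % "test",
  "io.gatling"            % "gatling-test-framework"    % "3.6.0" % "test",
  // the plugin jar: published coordinates are illustrative; alternatively
  // drop the assembled jar under lib/ so sbt picks it up as an unmanaged dependency
  "com.github.mnogu" %% "gatling-kafka" % "0.1.0" % "test"
)
```

With this layout, simulations live under `src/test/scala` and run via `sbt Gatling/test`, so the bundle's `user-files/simulations` folder is not needed.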

Issues with Gatling 3.0

We use this extension in one of our projects; thank you very much for providing it. However, we noticed that if we change the Gatling version to 3.0 and build a new jar, it throws compilation errors. It looks like the protocol implementation changed in Gatling 3.0; any help on how to fix these would be appreciated.

Gatling 3.4 compatibility

Hello.

Not sure if this project is still maintained, but if anyone needs it, here is a patch for Gatling 3.4.2 compatibility:
support_for_gatling_3_4.patch.txt

I'd have created a PR, but I'm not allowed to create a branch.

These modifications are NOT thoroughly tested. I've only changed enough that the project compiles against Gatling 3.4.2, and verified that sending a message to a Kafka topic works (check the BaseSimulation example).

In case anyone needs the sources, I've forked this repository and pushed the modifications to a branch named gatling-3.4: https://github.com/arthur25000/gatling-kafka/tree/gatling-3.4

Exception when running Gatling Simulation

Hi,

I have installed Gatling and included the plugin jar.

I have included my simulation below (with the servers changed); it is based on the provided example.

However, when I run this simulation, I get the exception

    09:38:53.581 [ERROR] c.g.m.g.k.a.KafkaRequestAction - 'kafkaRequest-1' failed to execute: Can't cast value Test Message of type class java.lang.String into class scala.runtime.Nothing$

Does anyone have any idea what I'm doing wrong?

Thanks,
Hannah

import io.gatling.core.Predef._
import org.apache.kafka.clients.producer.ProducerConfig
import scala.concurrent.duration._

import com.github.mnogu.gatling.kafka.Predef._

class KafkaSimulation extends Simulation {
    val kafkaConf = kafka
    // Kafka topic name
    .topic("GatlingTopic")
    // Kafka producer configs
    .properties(
      Map(
        ProducerConfig.ACKS_CONFIG -> "1",
        ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> "x:9092,y:9092,z:9092",
        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.ByteArraySerializer",
        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.ByteArraySerializer"))

  val scn = scenario("Kafka Test")
    .exec(kafka("request").send("Test Message"))

  setUp(
    scn
      .inject(constantUsersPerSec(10) during(90 seconds)))
    .protocols(kafkaConf)
}
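The cast failure above is consistent with a type-inference gap: `send("Test Message")` without a type parameter lets Scala infer `Nothing`, and the configured ByteArraySerializer does not match a String payload in any case. A hedged sketch of the two likely fixes (an assumption about the cause, not a confirmed answer from the maintainer):

```scala
// Option A: send a String and switch the value serializer to match, i.e.
//   ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG ->
//     "org.apache.kafka.common.serialization.StringSerializer"
kafka("request").send[String]("Test Message")

// Option B: keep the ByteArraySerializer and send bytes explicitly
kafka("request").send[Array[Byte]]("Test Message".getBytes("UTF-8"))
```

Either way, the key point is making the `send` type parameter explicit so it agrees with the configured serializer.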

Publish to maven repo and Github releases?

Hi, I see you tag formal releases of this project's code. It would be nice if you:

  • compile the project's code with default config with the releases so it goes into Github release section of this project for one to download without compiling themselves if they use the defaults of the project.
  • compile the project's code with default config to maven repo so that those that use Gatling with maven can pull it into their projects without the manual copy steps (or having to use their own custom maven config or local repo to host the compiled version of gatling-kafka). Or do you already do this? I didn't find it searching the public maven repo.

CheckBuilder

Hi, very nice API. Could you please include check() functionality in KafkaRequestActionBuilder, like the one in the HTTP construct of the Gatling API? This would go a long way toward including custom validation as part of a test, so the plugin could also be used for functional tests. I already have it running for functional tests, but I'm missing the check() construct to plug in custom validation checks, leveraging the validation approach discussed here: https://groups.google.com/forum/#!searchin/gatling/checkbuilder|sort:date/gatling/FoLyGSEiQtw/cKv1NvazAwAJ
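Purely to illustrate the request above, a check hook might mirror Gatling's HTTP check DSL. Everything in this sketch is hypothetical; `check` and `recordMetadata` do not exist in gatling-kafka, and the names are invented for illustration only:

```scala
// Hypothetical API sketch -- NOT part of gatling-kafka
kafka("request")
  .send[String]("payload")
  .check(
    // e.g. assert on the broker's acknowledgement metadata for the sent record
    recordMetadata(_.topic).is("test_events")
  )
```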

Support for Gatling 3.5 and Scala 2.13

Apparently there's already a working version of this extension for Gatling 3.5 and Scala 2.13.
I was wondering if/when that version will be published to Maven Central, so that we can just add it to our dependencies instead of cloning this repo, assembling the jar file, and manually adding it to our project.

Running with Gatling 2.2.3

Found some errors when trying to run with Gatling 2.2.3.

Any timeline for when this will be updated?

Or a workaround, maybe?

NoClassDefFoundError

On replicating the whole procedure on my MacBook Pro (2017), I can successfully create the jar and link the simulation file. However, when running the simulation I get the following error:

Simulation com.github.mnogu.gatling.kafka.test.KafkaSimulation started...
Uncaught error from thread [GatlingSystem-akka.actor.default-dispatcher-4] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[GatlingSystem]
java.lang.NoClassDefFoundError: io/gatling/commons/util/ClockSingleton$
at com.github.mnogu.gatling.kafka.action.KafkaRequestAction$$anonfun$com$github$mnogu$gatling$kafka$action$KafkaRequestAction$$sendRequest$1.apply(KafkaRequestAction.scala:65)
at com.github.mnogu.gatling.kafka.action.KafkaRequestAction$$anonfun$com$github$mnogu$gatling$kafka$action$KafkaRequestAction$$sendRequest$1.apply(KafkaRequestAction.scala:56)
at io.gatling.commons.validation.Success.map(Validation.scala:32)
...
..
..

Can anyone share thoughts on the probable cause?
