dropwizard / dropwizard-kafka

A convenience library for Apache Kafka integration in a Dropwizard service.

License: Apache License 2.0

Topics: dropwizard-kafka, kafka-client, kafka-consumers, kafka

dropwizard-kafka's Introduction

dropwizard-kafka


Provides easy integration for Dropwizard applications with the Apache Kafka client.

This bundle comes with out-of-the-box support for:

  • YAML Configuration integration
  • Producer and Consumer lifecycle management
  • Producer and Cluster connection health checks
  • Metrics integration for the Kafka client
  • An easier way to create/configure Kafka consumers/producers than is offered by the base Kafka client
  • Distributed tracing integration, using the Brave Kafka client instrumentation library.

For more information on Kafka, take a look at the official documentation here: http://kafka.apache.org/documentation/

Dropwizard Version Support Matrix

dropwizard-kafka   Compatible Dropwizard version(s)
v1.3.x             Dropwizard v1.3.x
v1.4.x             Dropwizard v2.0.x
v1.5.x             Dropwizard v2.0.x
v1.6.x             Dropwizard v2.0.x
v1.7.x             Dropwizard v2.0.x
v1.8.x             Dropwizard v2.1.x
v3.0.x             Dropwizard v3.0.x
v4.0.x             Dropwizard v4.0.x

Usage

Add a dependency on the library.

Maven:

<dependency>
  <groupId>io.dropwizard.modules</groupId>
  <artifactId>dropwizard-kafka</artifactId>
  <version>1.7.0</version>
</dependency>

Gradle:

implementation "io.dropwizard.modules:dropwizard-kafka:1.7.0"

Basic Kafka Producer

In your Dropwizard Configuration class, configure a KafkaProducerFactory:

@Valid
@NotNull
@JsonProperty("producer")
private KafkaProducerFactory<String, String> kafkaProducerFactory;

Then, in your Application class, you'll want to do something similar to the following:

private final KafkaProducerBundle<String, String, ExampleConfiguration> kafkaProducer = new KafkaProducerBundle<String, String, ExampleConfiguration>() {
    @Override
    public KafkaProducerFactory<String, String> getKafkaProducerFactory(ExampleConfiguration configuration) {
        return configuration.getKafkaProducerFactory();
    }
};

@Override
public void initialize(Bootstrap<ExampleConfiguration> bootstrap) {
    bootstrap.addBundle(kafkaProducer);
}

@Override
public void run(ExampleConfiguration config, Environment environment) {
    final PersonEventProducer personEventProducer = new PersonEventProducer(kafkaProducer.getProducer());
    environment.jersey().register(new PersonEventResource(personEventProducer));
}

Configure your factory in your config.yml file:

producer:
  type: basic
  bootstrapServers:
    - 127.0.0.1:9092
    - 127.0.0.1:9093
    - 127.0.0.1:9094
  name: producerNameToBeUsedInMetrics
  keySerializer:
    type: string
  valueSerializer:
    type: string
  acks: all
  retries: 2147483647 # int max value
  maxInFlightRequestsPerConnection: 1
  maxPollBlockTime: 10s
  security:
    securityProtocol: sasl_ssl
    sslProtocol: TLSv1.2
    saslMechanism: PLAIN
    saslJaas: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<password>\";"
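
For illustration, the PersonEventProducer registered above might be a thin wrapper around the bundle-managed producer. This is a minimal sketch, not code from this library; the topic name, method names, and error handling are assumptions:

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical wrapper around the bundle-managed producer; the topic name
// and key scheme are illustrative assumptions, not part of this library.
public class PersonEventProducer {
    private static final String TOPIC = "person-events"; // assumed topic name

    private final Producer<String, String> producer;

    public PersonEventProducer(final Producer<String, String> producer) {
        this.producer = producer;
    }

    public void publish(final String personId, final String eventJson) {
        // send() is asynchronous; the callback reports failures without blocking the caller
        producer.send(new ProducerRecord<>(TOPIC, personId, eventJson), (metadata, exception) -> {
            if (exception != null) {
                // a real service would use a logger and/or metrics here
                exception.printStackTrace();
            }
        });
    }
}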

Basic Kafka Consumer

In your Dropwizard Configuration class, configure a KafkaConsumerFactory:

@Valid
@NotNull
@JsonProperty("consumer")
private KafkaConsumerFactory<String, String> kafkaConsumerFactory;

Then, in your Application class, you'll want to do something similar to the following:

private final KafkaConsumerBundle<String, String, ExampleConfiguration> kafkaConsumer = new KafkaConsumerBundle<String, String, ExampleConfiguration>() {
    @Override
    public KafkaConsumerFactory<String, String> getKafkaConsumerFactory(ExampleConfiguration configuration) {
        return configuration.getKafkaConsumerFactory();
    }
};

@Override
public void initialize(Bootstrap<ExampleConfiguration> bootstrap) {
    bootstrap.addBundle(kafkaConsumer);
}

@Override
public void run(ExampleConfiguration config, Environment environment) {
    final PersonEventConsumer personEventConsumer = new PersonEventConsumer(kafkaConsumer.getConsumer());
    personEventConsumer.startConsuming();
}

Configure your factory in your config.yml file:

consumer:
  type: basic
  bootstrapServers:
    - 127.0.0.1:9092
    - 127.0.0.1:9093
    - 127.0.0.1:9094
  consumerGroupId: consumer1
  name: consumerNameToBeUsedInMetrics
  keyDeserializer:
    type: string
  valueDeserializer:
    type: string
  security:
    securityProtocol: sasl_ssl
    sslProtocol: TLSv1.2
    saslMechanism: PLAIN
    saslJaas: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<password>\";"
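
Likewise, the PersonEventConsumer used above might run a plain Kafka poll loop. A minimal sketch, with the topic name, threading, and record handling as assumptions (note that the Kafka consumer is not thread-safe, so all calls stay on the single executor thread; production code would also need wakeup()/close handling on shutdown):

import java.time.Duration;
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Hypothetical poll loop around the bundle-managed consumer.
public class PersonEventConsumer {
    private final Consumer<String, String> consumer;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final AtomicBoolean running = new AtomicBoolean(false);

    public PersonEventConsumer(final Consumer<String, String> consumer) {
        this.consumer = consumer;
    }

    public void startConsuming() {
        running.set(true);
        executor.submit(() -> {
            consumer.subscribe(Collections.singletonList("person-events")); // assumed topic
            while (running.get()) {
                final ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (final ConsumerRecord<String, String> record : records) {
                    handle(record); // application-specific processing
                }
            }
        });
    }

    private void handle(final ConsumerRecord<String, String> record) {
        System.out.printf("key=%s value=%s%n", record.key(), record.value());
    }
}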

Using an older version of the Kafka Client

This library aims to remain backward compatible, so that you can override the version of the Kafka client it pulls in. If that ever stops being possible, separate branches will be needed for the differing major versions of the Kafka client.

For example, say you would like to use version 1.1.1 of the Kafka client. One option is to declare an explicit dependency on version 1.1.1 of kafka-clients before the dependency on dropwizard-kafka. This works because Maven's dependency mediation picks the nearest declaration (and, at equal depth, the first one listed), so your explicit kafka-clients version wins over the transitive one; running mvn dependency:tree will confirm which version was resolved.

<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.1.1</version>
  </dependency>
  <dependency>
    <groupId>io.dropwizard.modules</groupId>
    <artifactId>dropwizard-kafka</artifactId>
    <version>${dropwizard.version}</version>
  </dependency>
</dependencies>

Adding support for additional serializers and/or deserializers

In order to support additional serializers or deserializers, you'll need to create a new factory:

@JsonTypeName("my-serializer")
public class MySerializerFactory extends SerializerFactory {

    @NotNull
    @JsonProperty
    private String someConfig;

    public String getSomeConfig() {
        return someConfig;
    }

    public void setSomeConfig(final String someConfig) {
        this.someConfig = someConfig;
    }


    @Override
    public Class<? extends Serializer> getSerializerClass() {
        return MySerializer.class;
    }
}
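
The factory above references a MySerializer class; the matching serializer implements the Kafka Serializer interface. A minimal sketch, with the payload type and encoding as assumptions:

import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.kafka.common.serialization.Serializer;

// Minimal sketch of the serializer referenced by MySerializerFactory;
// the String payload and UTF-8 encoding are illustrative assumptions.
public class MySerializer implements Serializer<String> {

    @Override
    public void configure(final Map<String, ?> configs, final boolean isKey) {
        // read any settings passed through from the factory, if needed
    }

    @Override
    public byte[] serialize(final String topic, final String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public void close() {
        // nothing to release
    }
}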

Then you will need to add the following files to your src/main/resources/META-INF/services directory in order to support Jackson polymorphic serialization:

File named io.dropwizard.jackson.Discoverable:

io.dropwizard.kafka.serializer.SerializerFactory

File named io.dropwizard.kafka.serializer.SerializerFactory:

package.name.for.your.MySerializerFactory
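
With the service files in place, the new factory can be selected in config.yml by its JsonTypeName, the same way the built-in string type is selected above. A hypothetical example (the someConfig value is an assumption):

producer:
  # ... other producer settings ...
  valueSerializer:
    type: my-serializer
    someConfig: exampleValue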

dropwizard-kafka's People

Contributors

dependabot-preview[bot], dependabot[bot], joschi, jplock, onecricketeer, pstackle, renovate[bot], saeedshahab


dropwizard-kafka's Issues

New undocumented constructor for Consumer

The KafkaConsumerBundle does not have an empty constructor; the only one available takes a Collection<String> of topics and a ConsumerRebalanceListener.

It does not seem like the topics argument actually does anything, as I still have to call subscribe/unsubscribe manually in my consumer. Is this correct?

Also, could someone please provide some documentation on what to implement in the ConsumerRebalanceListener? It has two methods, but is it OK to leave them empty? Do I have to do some manual commit of offsets here?

new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(final Collection<TopicPartition> collection) {
        // Intentionally empty
    }

    @Override
    public void onPartitionsAssigned(final Collection<TopicPartition> collection) {
        // Intentionally empty
    }
}

Add AdminClient

Please add a factory for org.apache.kafka.clients.admin.AdminClient.

Bundle creation dependency cycle?

As of version 1.5.0 (at least), the KafkaConsumerBundle constructor accepts a list of topics and a rebalance listener.

My understanding of the consumer rebalance listener is that it is a mechanism that lets the consumer be warned of a rebalancing event so it has an opportunity to commit the offsets it is currently processing. That means the rebalance listener needs a Consumer instance, which it cannot have at that point, since the bundle being created is what provides that instance.

I can work around this with a setter for the consumer instance on the rebalance listener (see the sketch below), but that is confusing.

The same remark applies to the topics argument: we don't have access to the configuration in the bootstrap() step, so we cannot load the topics from configuration at that point.
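
A rough illustration of that setter-based workaround (a hypothetical sketch; the class name is an assumption, and committing offsets on revocation is only one possible choice):

import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Illustrative sketch: the listener is constructed before the consumer
// exists, and the consumer is injected later via a setter.
public class LateBoundRebalanceListener implements ConsumerRebalanceListener {
    private volatile Consumer<?, ?> consumer;

    public void setConsumer(final Consumer<?, ?> consumer) {
        this.consumer = consumer;
    }

    @Override
    public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
        if (consumer != null) {
            consumer.commitSync(); // commit processed offsets before losing the partitions
        }
    }

    @Override
    public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
        // no-op; state could be restored here if needed
    }
}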

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository.

  • WARN: Base branch does not exist - skipping

Open

These updates have all been created already.

  • Update kafka-client.version to v3.7.0 (release/2.1.x) (minor) (org.apache.kafka:kafka_2.13, org.apache.kafka:kafka-streams-test-utils, org.apache.kafka:kafka-server-common, org.apache.kafka:kafka-raft, org.apache.kafka:kafka-metadata, org.apache.kafka:kafka-streams, org.apache.kafka:kafka-clients)
  • Update kafka-client.version to v3.7.0 (release/3.0.x) (minor) (org.apache.kafka:kafka_2.13, org.apache.kafka:kafka-streams-test-utils, org.apache.kafka:kafka-server-common, org.apache.kafka:kafka-raft, org.apache.kafka:kafka-metadata, org.apache.kafka:kafka-streams, org.apache.kafka:kafka-clients)
  • Update kafka-client.version to v3.7.0 (release/4.0.x) (minor) (org.apache.kafka:kafka_2.13, org.apache.kafka:kafka-streams-test-utils, org.apache.kafka:kafka-server-common, org.apache.kafka:kafka-raft, org.apache.kafka:kafka-metadata, org.apache.kafka:kafka-streams, org.apache.kafka:kafka-clients)

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated.

Detected dependencies

Branch release/2.1.x
github-actions
.github/workflows/build.yml
  • dropwizard/workflows main
  • dropwizard/workflows main
.github/workflows/release.yml
  • dropwizard/workflows main
maven
pom.xml
  • io.dropwizard.modules:module-parent 2.1.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-streams 3.6.2
  • org.apache.kafka:kafka-metadata 3.6.2
  • org.apache.kafka:kafka-raft 3.6.2
  • org.apache.kafka:kafka-server-common 3.6.2
  • org.apache.kafka:kafka-streams-test-utils 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • io.zipkin.brave:brave-instrumentation-kafka-clients 6.0.3
  • org.springframework.kafka:spring-kafka-test 3.1.4
  • org.apache.maven.plugins:maven-failsafe-plugin 3.2.5
  • org.apache.maven.plugins:maven-enforcer-plugin 3.4.1
maven-wrapper
.mvn/wrapper/maven-wrapper.properties
  • maven 3.9.6
Branch release/3.0.x
github-actions
.github/workflows/build.yml
  • dropwizard/workflows main
  • dropwizard/workflows main
.github/workflows/release.yml
  • dropwizard/workflows main
maven
pom.xml
  • io.dropwizard.modules:module-parent 3.0.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-streams 3.6.2
  • org.apache.kafka:kafka-metadata 3.6.2
  • org.apache.kafka:kafka-raft 3.6.2
  • org.apache.kafka:kafka-server-common 3.6.2
  • org.apache.kafka:kafka-streams-test-utils 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • io.zipkin.brave:brave-instrumentation-kafka-clients 6.0.3
  • org.springframework.kafka:spring-kafka-test 3.1.4
  • org.apache.maven.plugins:maven-failsafe-plugin 3.2.5
  • org.apache.maven.plugins:maven-enforcer-plugin 3.4.1
maven-wrapper
.mvn/wrapper/maven-wrapper.properties
  • maven 3.9.6
Branch release/4.0.x
github-actions
.github/workflows/build.yml
  • dropwizard/workflows main
  • dropwizard/workflows main
.github/workflows/release.yml
  • dropwizard/workflows main
.github/workflows/trigger-release.yml
  • dropwizard/workflows main
maven
pom.xml
  • io.dropwizard.modules:module-parent 4.0.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-clients 3.6.2
  • org.apache.kafka:kafka-streams 3.6.2
  • org.apache.kafka:kafka-metadata 3.6.2
  • org.apache.kafka:kafka-raft 3.6.2
  • org.apache.kafka:kafka-server-common 3.6.2
  • org.apache.kafka:kafka-streams-test-utils 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • org.apache.kafka:kafka_2.13 3.6.2
  • io.zipkin.brave:brave-instrumentation-kafka-clients 6.0.3
  • org.springframework.kafka:spring-kafka-test 3.1.4
  • org.apache.maven.plugins:maven-failsafe-plugin 3.2.5
  • org.apache.maven.plugins:maven-enforcer-plugin 3.4.1
maven-wrapper
.mvn/wrapper/maven-wrapper.properties
  • maven 3.9.6


Schema registry

Hi,

Do we have support for the Kafka schema registry?

Regards,
Kishore

Switch to a semantic versioning scheme?

The current versioning scheme (pinning to the version of Dropwizard used in the project, plus an incrementing counter, e.g. 1.3.12-2) has some drawbacks that could be addressed by using semantic versioning. Namely, it's impossible to tell what a new version contains without reading and comprehending the release notes.

I think the main drawback with switching to SemVer would be the loss of obvious Dropwizard compatibility, which could be solved by having a compatibility matrix of sorts.

Discussion started here: https://groups.google.com/forum/?hl=en#!topic/dropwizard-dev/M97n4KXKyFA

KafkaProducerBundle and Topics

Hi,

I'm relatively new to Dropwizard (and new to Kafka!), but I was looking at using this module as it seemed like the best way to get started.

In #22 this comment talks about KafkaProducerBundle not having a default constructor.

From looking at the docs and the source code, it seems I need to pass the topics to the constructor.

If I understand the bundle lifecycle correctly, doesn't this mean that the topics can't come from the Configuration?

Am I missing something by wanting to retrieve the topics from configuration?

Is this ready for GA?

I'm asking because I tried adding it to my project and the basic usage example in the README.md doesn't seem to work...

Also, what's with KafkaProducerBundle expecting the Dropwizard configuration type to inherit from javax.security.auth.login.Configuration rather than io.dropwizard.Configuration? Is this a bug?

Topic Management from Configuration

Now that AdminClient support has been added, I suggest adding a way to create topics during lifecycle startup if they don't already exist.

Relevant methods:

Configuration suggestion

topics:
  - name: foo
    partitions: 3
    replicationFactor: 3
  # topic with additional configuration
  - name: bar
    partitions: 1
    replicationFactor: 5
    config:
      cleanup.policy: compact

Additionally, it would be nice to have a health check that verifies topic existence and configuration details, but that can be a future issue.
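
A startup hook along these lines could be built on the plain AdminClient API. A hedged sketch of the suggestion (the class name and wiring are assumptions; the topics mirror the configuration suggestion above):

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// Illustrative sketch: create configured topics at startup if they are missing.
public class TopicBootstrap {

    public void ensureTopics(final AdminClient adminClient) throws Exception {
        // topics from the configuration suggestion above
        final List<NewTopic> desired = List.of(
                new NewTopic("foo", 3, (short) 3),
                new NewTopic("bar", 1, (short) 5)
                        .configs(Map.of("cleanup.policy", "compact")));

        final Set<String> existing = adminClient.listTopics().names().get();
        final List<NewTopic> missing = desired.stream()
                .filter(topic -> !existing.contains(topic.name()))
                .collect(Collectors.toList());

        if (!missing.isEmpty()) {
            adminClient.createTopics(missing).all().get(); // block until creation completes
        }
    }
}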

Example code illustrating usage

The README is a good first step at docs, but it's been suggested that there be code examples illustrating the usage of this library.

Add SASL plaintext security configuration

At the moment, dropwizard-kafka does not support SASL plaintext authentication to a Kafka server. The difference between SASL_SSL and SASL_PLAINTEXT security is the absence of the ssl.protocol configuration key (whose value is set to TLSv1.2) during setup.
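
If such support were added, the security block would presumably mirror the existing sasl_ssl examples minus the sslProtocol key. A hypothetical sketch (this configuration is not currently supported):

security:
  securityProtocol: sasl_plaintext
  saslMechanism: PLAIN
  saslJaas: "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"<username>\" password=\"<password>\";"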
