strimzi / client-examples

Example clients for use with Strimzi

Home Page: https://strimzi.io

License: Apache License 2.0

Makefile 3.19% Java 83.20% Dockerfile 4.76% Shell 0.71% Go 7.85% 1C Enterprise 0.28%

client-examples's Introduction

Strimzi

Run Apache Kafka on Kubernetes and OpenShift


Strimzi provides a way to run an Apache Kafka® cluster on Kubernetes or OpenShift in various deployment configurations. See our website for more details about the project.

Quick Starts

To get up and running quickly, check our Quick Start for Minikube, OKD (OpenShift Origin) and Kubernetes Kind.

Documentation

Documentation for the current main branch as well as all releases can be found on our website.

Roadmap

The roadmap of the Strimzi Operator project is maintained as a GitHub Project.

Getting help

If you encounter any issues while using Strimzi, you can get help through our community channels.

Strimzi Community Meetings

You can join our regular community meetings.

Contributing

You can contribute by:

  • Raising any issues you find using Strimzi
  • Fixing issues by opening Pull Requests
  • Improving documentation
  • Talking about Strimzi

All bugs, tasks, and enhancements are tracked as GitHub issues. Issues which might be a good start for new contributors are marked with the "good-start" label.

The Dev guide describes how to build Strimzi. Before submitting a patch, please make sure you understand how to test your changes; see the Test guide before opening a PR.

The Documentation Contributor Guide describes how to contribute to Strimzi documentation.

If you want to get in touch with us before contributing, you can use our community channels.

License

Strimzi is licensed under the Apache License, Version 2.0.

Container signatures

From the 0.38.0 release, Strimzi containers are signed using the cosign tool. Strimzi currently does not use keyless signing or the transparency log. To verify a container, you can copy the following public key into a file:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

And use it to verify the signature:

cosign verify --key strimzi.pub quay.io/strimzi/operator:latest --insecure-ignore-tlog=true

Software Bill of Materials (SBOM)

From the 0.38.0 release, Strimzi publishes the software bill of materials (SBOM) of our containers. The SBOMs are published as an archive in SPDX-JSON and Syft-Table formats, signed using cosign. For releases, they are also pushed into the container registry. To verify the SBOM signatures, please use the Strimzi public key:

-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAET3OleLR7h0JqatY2KkECXhA9ZAkC
TRnbE23Wb5AzJPnpevvQ1QUEQQ5h/I4GobB7/jkGfqYkt6Ct5WOU2cc6HQ==
-----END PUBLIC KEY-----

You can use it to verify the signature of the SBOM files with the following command:

cosign verify-blob --key cosign.pub --bundle <SBOM-file>.bundle --insecure-ignore-tlog=true <SBOM-file>

Strimzi is a Cloud Native Computing Foundation incubating project.


client-examples's People

Contributors

csalmhof, dependabot[bot], frawless, im-konge, michalxo, owencorrigan76, ppatierno, scholzj, see-quick, sknot-rh, tombentley


client-examples's Issues

Env variable TRACING_SYSTEM cannot be blank

We just got a crash-looping container for the quay.io/strimzi-examples/java-kafka-consumer:latest image, as the newly introduced env variable TRACING_SYSTEM cannot be blank.

The error we see is:

Exception in thread "main" java.lang.NullPointerException
	at TracingSystem.forValue(TracingSystem.java:12)
	at KafkaConsumerConfig.fromEnv(KafkaConsumerConfig.java:75)
	at KafkaConsumerExample.main(KafkaConsumerExample.java:22)

It seems the tracing system currently needs to be set, otherwise the application won't run: https://github.com/strimzi/client-examples/blob/main/java/kafka/consumer/src/main/java/KafkaConsumerConfig.java#L75
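
A null-tolerant lookup would avoid the crash. Below is a minimal sketch, assuming an enum shaped like the one in the repo (the exact values and the NONE fallback are assumptions, not the project's actual code):

public enum TracingSystem {
    JAEGER,
    OPENTELEMETRY,
    NONE; // hypothetical fallback meaning "tracing disabled"

    public static TracingSystem forValue(String value) {
        // Treat a missing or blank TRACING_SYSTEM as "no tracing" instead of
        // dereferencing null and throwing a NullPointerException.
        if (value == null || value.isBlank()) {
            return NONE;
        }
        switch (value.toLowerCase()) {
            case "jaeger":        return JAEGER;
            case "opentelemetry": return OPENTELEMETRY;
            default:              return NONE;
        }
    }
}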

Replace OpenTelemetry Jaeger exporter with the default OTLP exporter

Strimzi components have already moved to the OTLP exporter (instead of the Jaeger one) for OpenTelemetry support.
I think the examples should do the same, as it is the default.
To do that, the examples should replace the opentelemetry-exporter-jaeger dependency with opentelemetry-exporter-otlp.
We can even get rid of the OTEL_TRACES_EXPORTER env var in the YAMLs, because OTLP is the default protocol for OpenTelemetry. We should also replace the OTEL_EXPORTER_JAEGER_ENDPOINT env var with OTEL_EXPORTER_OTLP_ENDPOINT (it uses port 4317).
Finally, this should be tested against a Jaeger backend exposing an OTLP endpoint, which can be enabled by starting Jaeger with the COLLECTOR_OTLP_ENABLED=true env var.
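
For illustration, here is a minimal sketch of wiring the OTLP exporter with the OpenTelemetry Java SDK. It is not the repo's actual initialization code, and the localhost default is just an assumption based on the standard OTLP/gRPC port:

import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
import io.opentelemetry.sdk.OpenTelemetrySdk;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class OtlpTracingSketch {
    public static OpenTelemetrySdk init() {
        // OTEL_EXPORTER_OTLP_ENDPOINT replaces OTEL_EXPORTER_JAEGER_ENDPOINT;
        // 4317 is the default OTLP/gRPC port mentioned above.
        String endpoint = System.getenv().getOrDefault(
                "OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4317");

        OtlpGrpcSpanExporter exporter = OtlpGrpcSpanExporter.builder()
                .setEndpoint(endpoint)
                .build();

        SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();

        return OpenTelemetrySdk.builder()
                .setTracerProvider(tracerProvider)
                .buildAndRegisterGlobal();
    }
}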

[HTTP clients] - wrongly defined YAML for HTTP producer

Our basic HTTP clients (i.e., not the Vert.x ones) are currently not using the correct env var to send messages. For instance, the producer example [1] defines

- name: STRIMZI_SEND_INTERVAL
  value: "1000"

but the code actually uses the STRIMZI_DELAY_MS env var [2]. We should correct this and check whether other clients are defined incorrectly in the same way.

[1] - https://github.com/strimzi/client-examples/blob/main/java/http/java-http-producer.yaml
[2] - https://github.com/strimzi/client-examples/blob/main/java/http/java-http-producer/src/main/java/HttpProducerConfig.java#L40
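
Until the YAMLs are fixed, the config class could accept both names during a transition. This is a hypothetical sketch, not the repo's code; the 1000 ms default is an assumption:

public class DelayConfigSketch {
    // Prefer the env var the code reads (STRIMZI_DELAY_MS), but fall back to
    // the one the YAML currently sets, so existing deployments keep working.
    static long delayMs() {
        String value = System.getenv("STRIMZI_DELAY_MS");
        if (value == null || value.isBlank()) {
            value = System.getenv("STRIMZI_SEND_INTERVAL");
        }
        return (value == null || value.isBlank()) ? 1000L : Long.parseLong(value);
    }
}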

Tracing seems to be enabled by default in HTTP examples

I deployed the HTTP consumer example with the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: java-http-consumer
  name: java-http-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-http-consumer
  template:
    metadata:
      labels:
        app: java-http-consumer
    spec:
      containers:
      - name: java-http-consumer
        image: quay.io/strimzi-examples/java-http-consumer:latest
        env:
          - name: STRIMZI_HOSTNAME
            value: my-bridge-bridge-service
          - name: STRIMZI_PORT
            value: "8080"
          - name: STRIMZI_TOPIC
            value: kafka-test-apps
          - name: STRIMZI_GROUPID
            value: "my-group"
          - name: STRIMZI_POLL_INTERVAL
            value: "1000"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "250m"

So I did not enable tracing. Yet, it seems to be enabled by default as I get this in the logs:

Mar 11, 2023 8:07:00 PM io.opentelemetry.sdk.internal.ThrottlingLogger doLog
SEVERE: Failed to export metrics. The request could not be executed. Full error message: Failed to connect to localhost/[0:0:0:0:0:0:0:1]:4317

So it seems like tracing is permanently enabled. This should be fixed so that tracing is enabled only when it is specified in the Deployment.
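
A guard along these lines would make tracing opt-in. This is a minimal sketch, assuming initialization currently happens unconditionally at startup; initTracing is a hypothetical helper:

public class TracingGuardSketch {
    public static void main(String[] args) {
        String tracingSystem = System.getenv("TRACING_SYSTEM");
        if (tracingSystem != null && !tracingSystem.isBlank()) {
            initTracing(tracingSystem);
        }
        // Otherwise do nothing: the OpenTelemetry global stays a no-op and
        // the client never tries to reach localhost:4317.
    }

    static void initTracing(String system) {
        // hypothetical: set up the OpenTelemetry SDK for the chosen system
    }
}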

Kafka consumer doesn't work anymore after refactoring

I tried to change our previously existing env vars to match the new ones after the refactoring.
However, even after following the renaming, an env var seems to be missing.

2022-12-23 09:36:00 INFO  KafkaConsumerExample:26 - KafkaConsumerConfig: KafkaConsumerConfig@70325e14
Exception in thread "main" org.apache.kafka.common.config.ConfigException: Invalid value null for configuration key.deserializer: must be non-null.
	at org.apache.kafka.clients.consumer.ConsumerConfig.appendDeserializerToConfig(ConsumerConfig.java:615)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:666)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:647)
	at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:627)
	at KafkaConsumerExample.main(KafkaConsumerExample.java:47)

This can be reproduced by the following:

  1. Create a file kafka-consumer.env with the following content (taken from the kafka-java-consumer.yaml)
KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap:9092
STRIMZI_TOPIC=my-topic
KAFKA_GROUP_ID=my-java-kafka-consumer
STRIMZI_LOG_LEVEL=INFO
STRIMZI_MESSAGE_COUNT=1000000
  2. Get latest container image with:
    docker pull quay.io/strimzi-examples/java-kafka-consumer:latest
  3. Start up a container with:
    docker run --rm -it --env-file=kafka-consumer.env quay.io/strimzi-examples/java-kafka-consumer:latest

You'll get the same error message as above.
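
For reference, the error means the consumer Properties were built without key/value deserializers. A minimal sketch (not the repo's actual config class) of defaults that would prevent it:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerConfigSketch {
    public static KafkaConsumer<String, String> create() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                System.getenv("KAFKA_BOOTSTRAP_SERVERS"));
        props.put(ConsumerConfig.GROUP_ID_CONFIG, System.getenv("KAFKA_GROUP_ID"));
        // key.deserializer and value.deserializer must be non-null; defaulting
        // them avoids the ConfigException when no env var provides them.
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        return new KafkaConsumer<>(props);
    }
}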

Remove the message count from HTTP examples

The YAML examples (see here) provide a very bad user experience. They do not set STRIMZI_MESSAGE_COUNT, so it defaults to sending or receiving 10 messages. Once they reach that number:

  • The producer completes. But since it is deployed as a Deployment, it will be restarted, so it is essentially crash-looping. While this is actually as designed in the code, everyone seeing it would assume it does not work. This needs to be fixed.
  • The consumer seems to delete its consumer instance but actually keeps running. This makes it even weirder, since the Pod runs but doesn't do anything at all. This is again a very bad user experience.

Additionally, the message counting in the consumer seems to be badly designed. It checks whether the given number of messages was received with equals (this.messageReceived == this.config.getMessageCount()). But the Kafka poll method can return multiple records and might increase the received count by more than one, so the desired number might never be matched exactly. If we decide to keep this, we should fix it to use a greater-than-or-equal check.


Overall, I think the right thing to do would be to remove the message count completely. It seems to be useless to me - I would prefer the pods to run and work constantly. But if we want to keep it, we need to make the example more usable. For example:

  • Set the message count to some much higher number (millions) so that it does not stop
  • Use a Job instead of a Deployment, or disable restarting, to avoid the crash loops
  • Fix the message counting in the consumer (see the sketch after this list)
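
As a sketch of the counting fix (hypothetical code, not the repo's actual consumer loop):

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class CountingFixSketch {
    // poll() can return several records at once, so the counter may jump past
    // the target; a >= style bound terminates where an == check can be skipped.
    public static void consumeUpTo(Consumer<String, String> consumer, long messageCount) {
        long received = 0;
        while (received < messageCount) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Received: " + record.value());
                received++;
            }
        }
    }
}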

Define a naming pattern for client examples Docker images

Related to #45, I had a chat with @scholzj and, since we are moving to quay.io anyway, it could be the right time to find a good pattern for image names across the different examples (and even move away from using "hello world" :-)).
We were thinking about something like:

<language>-<technology|framework|library>-<producer|consumer|streams>

For example:

  • java-kafka-producer ... using the native Java Kafka client
  • go-sarama-consumer ... in Go, using the Sarama library
  • java-vertx-producer ... using the Java Vert.x client
  • java-http-vertx-producer ... using HTTP with Vert.x
  • go-http-consumer ... in Go, using HTTP

Wdyt? @scholzj @tombentley @samuel-hawker ?

Move the Vert.x HTTP client examples to a newer Vert.x version

The Vert.x HTTP client examples are using a really old Vert.x version, 3.7.1.
We should move to a newer one (e.g. 4.3.7), but that would mean refactoring because of some bigger changes in Vert.x (e.g. Promise instead of Future, and more).
Anyway, it should be done to avoid temporary fixes like the one I made via #106 to override Jackson Databind with an older version just because of the old Vert.x one.
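
For context, the core API change looks roughly like this; the verticle below is hypothetical, sketching the Vert.x 4 convention where start receives a Promise rather than a Future:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

public class HttpProducerVerticleSketch extends AbstractVerticle {
    @Override
    public void start(Promise<Void> startPromise) {
        // In Vert.x 3.x this method took a Future<Void> startFuture and the
        // verticle called startFuture.complete(); in 4.x it is a Promise.
        vertx.createHttpClient();
        startPromise.complete();
    }
}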

Moving to Java 11 as runtime

As other related Strimzi repos have moved to Java 11 as the runtime, we should do the same for the images in this repo.

Inconsistency in created and used topics

In the hello-world-consumer.yaml and hello-world-producer.yaml files, the topic used is my-topic, as set in the TOPIC env var.
In deployment.yaml and deployment-ssl.yaml, the same TOPIC env var is set to hello-world-test-apps, but the YAML file contains a KafkaTopic resource named my-topic.
This means that my-topic will be created by the Topic Operator, while hello-world-test-apps will be created on the fly only because the Kafka cluster has the auto.create.topics.enable property set to true by default. It also means that the created my-topic will not be used by the consumer and producer.
We should make these declarations and names consistent, and the topic used should always be declared through a KafkaTopic resource.

ThreadPoolExecutor for producing data

Hello Strimzi team, is it possible to run a pool of threads to produce data to my cluster with this example? I want to attempt 800,000 TPS, given that the example currently waits for a delay between messages via Thread.sleep(config.getDelay());.
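
In reply: KafkaProducer is thread-safe, so one shared instance can be driven by a thread pool. A minimal sketch under assumed bootstrap/topic names, with the per-message delay removed:

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PooledProducerSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // One shared, thread-safe producer; its internal batching usually
            // outperforms one producer instance per thread.
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (int t = 0; t < 8; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < 100_000; i++) {
                        producer.send(new ProducerRecord<>("my-topic", "message-" + i));
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }
    }
}

Whether 800,000 messages per second is reachable depends on the brokers, acks, and batching settings, not just on the client-side thread count.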
